I'm trying to get a geo search to work via an association. Very similar to this fellow:
How do I geo-search multiple models with ThinkingSphinx?
The one difference is that I'm trying to combine the association syntax with custom SQL syntax. It doesn't print any errors on indexing, but when I try to search it fails:
class Person
  define_index do
    indexes tags(:text), :as => :tags

    has media.location.lat('RADIANS(location.lat)'), :as => :lat, :type => :float
    has media.location.lng('RADIANS(location.lng)'), :as => :lng, :type => :float
  end

  sphinx_scope(:by_location) { |loc|
    { :geo => [loc.lat.to_radians, loc.lng.to_radians],
      :with => { "@geodist" => 0.0..loc.radius },
      :latitude_attr => "lat",
      :longitude_attr => "lng" }
  }
end
# running this search from the console
Person.by_location(Location.first)
This is the error:
ThinkingSphinx::SphinxError: index fulfillment_core: unknown latitude attribute 'lat'
I've tried configuring it without the SQL string - this runs without error, but the math is of course totally wrong as it's trying to do radian operations on degrees.
Is there any way to combine the conversion and the association, or am I stuck storing my data as radians instead of degrees?
Your index definition isn't quite right. Try the following:
define_index do
  indexes tags(:text), :as => :tags

  # force the join on the media and location associations
  join media.location

  has 'RADIANS(location.lat)', :as => :lat, :type => :float
  has 'RADIANS(location.lng)', :as => :lng, :type => :float
end
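With the attributes defined as plain SQL strings, Sphinx sees float attributes named lat and lng, so the sphinx_scope from the question should work unchanged. A quick check from the console (a sketch; the coordinates and the 5_000.0 radius are made-up values, the radius in whatever units your radius column uses):

Person.by_location(Location.first)

# or, bypassing the scope:
Person.search :geo => [0.8144, -1.2926],               # radians, made-up point
              :with => { "@geodist" => 0.0..5_000.0 },
              :latitude_attr => "lat",
              :longitude_attr => "lng"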
Related
I have created a script that I want to use to populate a new table in another database, as I'm moving this table out of one DB (table_1) and into another (db_2). I've already created the 'new_geo_fence' table in the new DB (db_2) and want to run the script below to migrate the data over:
class NewGeoFence < ActiveRecord::Base
  attr_accessible :name, :address, :latitude, :longitude, :radius, :customer_id

  belongs_to :customer, :foreign_key => 'customer_id'
end

require 'rubygems'

GeoFence.all.each do |g|
  gf = NewGeoFence.new(
    :id => g.id,
    :name => g.name,
    :address => g.address,
    :latitude => g.latitude,
    :longitude => g.longitude,
    :radius => g.radius,
    :created_at => g.created_at,
    :updated_at => g.updated_at,
    :customer_id => g.customer_id
  )
  gf.save
end
However, when I run it, I get this error:
/activerecord-4.0.13/lib/active_record/attribute_assignment.rb:47:in `rescue in _assign_attribute': unknown attribute: customer_id (ActiveRecord::UnknownAttributeError)
What have I missed to get this script running?
Thanks!
You're calling each on a class when you should be calling it on an array of objects, so:

GeoFence.all.each do |g|

Also, Rails 4 requires parameters to be whitelisted when doing mass assignment. To do so, use strong parameters:
GeoFence.all.each do |g|
  params = ActionController::Parameters.new({
    geofence: {
      id: g.id,
      name: g.name,
      address: g.address,
      latitude: g.latitude,
      longitude: g.longitude,
      radius: g.radius,
      created_at: g.created_at,
      updated_at: g.updated_at,
      customer_id: g.customer_id
    }
  })
  gf = NewGeoFence.new(params.require(:geofence).permit!)
  gf.save
end
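Since the two tables share their column names, the copy can also be written without mass assignment at all; a minimal sketch (gf[k] = v writes each attribute directly, so no whitelisting is involved):

GeoFence.all.each do |g|
  gf = NewGeoFence.new
  # []= calls write_attribute, bypassing attr_accessible / strong parameters
  g.attributes.each { |k, v| gf[k] = v }
  gf.save!
end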
Hey, I am fairly new to web programming, so please excuse my ignorance.
I have the following code in a .html.haml view file.
%video{:controls => "controls", :height => "240", :width => "320"}
  %source{:src => @video_resource.file_path, :type => "video/mp4"}/
  %source{:src => @video_resource.file_path, :type => "video/webm"}/
  %source{:src => @video_resource.file_path, :type => "video/ogg"}/
  %object{:data => "flashmediaelement.swf", :height => "240", :type => "application/x-shockwave-flash", :width => "320"}
    %param{:name => "movie", :value => "flashmediaelement.swf"}/
    %param{:name => "flashvars", :value => "controls=true&file=movie.url"}/
    = t('html5_video_error')
How do I get @video_resource.file_path into the flashvars param's file variable (currently it is set to movie.url)? I may be misunderstanding the way this works.
What you need here is basic string interpolation; the Ruby syntax is the following:
# haml code:
%span= "I am an #{ 'interpolated' } string"

# will render:
<span>I am an interpolated string</span>
In your case:
%param{:name => "flashvars", :value => "controls=true&file=#{@video_resource.file_path}"}
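For example, assuming @video_resource.file_path returns "/videos/clip.mp4" (a made-up path), that param renders as:

<param name='flashvars' value='controls=true&file=/videos/clip.mp4' />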
I am trying to define the following index on my Category model:
define_index do
  has document.author.name, :as => :author_name, :facet => true
end
My model definitions are:
class Category < ActiveRecord::Base
  has_many :documents
end

class Author < ActiveRecord::Base
  has_many :documents
end

class Document < ActiveRecord::Base
  belongs_to :category
  belongs_to :author
end
A category may or may not have documents associated with it; many categories exist without any documents.
The problem is when I try to run the indexer I get:
Cannot automatically map column type NilClass to an equivalent Sphinx
type (integer, float, boolean, datetime, string as ordinal). You could try to
explicitly convert the column's value in your define_index block:
has "CAST(column AS INT)", :type => :integer, :as => :column
Has anyone run into this issue?
define_index do
  # firstly, you must have at least one indexed column
  indexes document.author.name, :as => :author_name, :facet => true

  # to add a 'has' on a string column, run it through CRC32:
  # has "CRC32(string_col)", :as => :filtered_string_col
end
If you need to filter on that 'has' column:

Category.search :with => { :filtered_string_col => "string to filter on".to_crc32 }
I want to return about 90k items in a JSON document but I'm getting this error when I make the call:
Timeout::Error in ApisController#api_b
time's up!
Rails.root: /root/api_b
I am simply running "rails s" with the default rails server.
What's the way to make this work and return the document?
Thanks
@bs.each do |a|
  puts "inside bs.each"
  @final << { :Email => a['headers']['to'], :At => a['date'], :subject => a['headers']['subject'], :Type => a['headers']['status'], :Message_id => a['headers']['message_id'] }
end

Here @bs is the BSON result set from MongoDB. The timeout occurs in the "@final << ..." line.
If you are experiencing timeouts from Rails and it is possible to cache the data (e.g. the data changes infrequently), I would generate the response in the background using resque or delayed_job and then have Rails dump that to the client. Or, if the data cannot be cached, use a lightweight Rack handler like Sinatra or Metal to generate the responses.
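A minimal sketch of the cached variant, assuming the resque gem and a configured Rails.cache store (the job class, queue name, and cache key are made up):

# hypothetical background job (resque)
class ApiBReport
  @queue = :reports

  def self.perform
    conn = Mongo::Connection.new                    # mongo gem 1.x API
    docs = conn['sample-dbs']['email-test'].find({}, :fields => ['headers', 'date'])
    json = docs.map { |a|
      { :Email => a['headers']['to'], :At => a['date'] }
    }.to_json
    Rails.cache.write('api_b/report', json)
  end
end

# the controller action just serves the precomputed document
def api_b
  body = Rails.cache.read('api_b/report')
  render :json => (body || { :status => 'pending' })
end

Enqueue it with Resque.enqueue(ApiBReport) whenever the data changes.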
Edited to reflect sample data
I was able to run the following code in a Rails 3.0.9 instance against a high-performance Mongo 1.8.4 instance. I was using mongo 1.3.1, bson_ext 1.3.1, webrick 1.3.1, and Ruby 1.9.2p180 x64. It did not time out, but it took some time to load. My sample Mongo DB has 100k records and contains no indexes.
before_filter :profile_start
after_filter :profile_end

def index
  db = @conn['sample-dbs']
  collection = db['email-test']
  @final = []
  @bs = collection.find({})
  @bs.each do |a|
    puts "inside bs.each"
    @final << { :Email => a['headers']['to'], :At => a['date'], :subject => a['headers']['subject'], :Type => a['headers']['status'], :Message_id => a['headers']['message_id'] }
  end
  render :json => @final
end

private

def profile_start
  RubyProf.start
end

def profile_end
  RubyProf::FlatPrinter.new(RubyProf.stop).print
end
A more efficient way to dump out the records would be
@bs = collection.find({}, {:fields => ["headers", "date"]})
@final = @bs.map { |a| { :Email => a['headers']['to'], :At => a['date'], :subject => a['headers']['subject'], :Type => a['headers']['status'], :Message_id => a['headers']['message_id'] } }
render :json => @final
My data generator
100000.times do |i|
  p i
  @coll.insert({ :date => Time.now, :headers => { "to" => "me@foo.com", "subject" => "meeeeeeeeee", "status" => "ffffffffffffffffff", "message_id" => "1234634673" } })
end
I've been deploying some apps to Heroku recently. I run MySQL on my local dev machine and have spent a little while updating some of my scopes to work in PostgreSQL. However, one I have received an error on is proving difficult to change.

For the time being I've got a database-specific case statement in my model. I understand why the error regarding the MySQL date functions is occurring, but I'm not sure if this is the most efficient solution. Does anyone have a better way of implementing a fix that will work with both MySQL and PostgreSQL?
case ActiveRecord::Base.connection.adapter_name
when 'PostgreSQL'
  named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (date_part('year', created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_month, lambda { |*args| {:conditions => ["published = ? AND (date_part('month', created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_day, lambda { |*args| {:conditions => ["published = ? AND (date_part('day', created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
else
  named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (YEAR(created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_month, lambda { |*args| {:conditions => ["published = ? AND (MONTH(created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_day, lambda { |*args| {:conditions => ["published = ? AND (DAY(created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
end
FYI, this is the PostgreSQL error that I am getting:
PGError: ERROR: function month(timestamp without time zone) does not exist LINE 1: ...T * FROM "articles" WHERE (((published = 't' AND (MONTH(crea... ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. : SELECT * FROM "articles" WHERE (((published = 't' AND (MONTH(created_at) = '11')) AND (published = 't' AND (YEAR(created_at) = '2010'))) AND ("articles"."published" = 't')) ORDER BY created_at DESC LIMIT 5 OFFSET 0
Thanks in advance for any input anyone has.
You should be using the standard EXTRACT function:
named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (extract(year from created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
Both PostgreSQL and MySQL support it.
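Applied to the other two scopes from the question (same bodies, only the date function changes):

named_scope :by_month, lambda { |*args| {:conditions => ["published = ? AND (extract(month from created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
named_scope :by_day, lambda { |*args| {:conditions => ["published = ? AND (extract(day from created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }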
Unfortunately this happens a lot, but you have the right general idea.

Your first method of attack is to see if there is a function that exists in both MySQL and Postgres; however, that isn't possible in this case.

The one suggestion I would make is that there is a lot of code duplication in this solution. Considering the condition string is the only compatibility issue here, I would factor out the compatibility check into just the condition:
Example (semi-pseudo code):

named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (#{by_year_condition} = ?)", true, (args.first)], :order => "created_at DESC"} }

# ...code...

def self.by_year_condition
  if ActiveRecord::Base.connection.adapter_name == 'PostgreSQL'
    "date_part('year', created_at)"
  else
    "YEAR(created_at)"
  end
end
Another option would be to create computed columns for each of your date parts (day, month, and year) and to query directly against those. You could keep them up to date with your model code or with triggers. You'll also get the benefit of being able to index on various combinations on your year, month, and day columns. Databases are notoriously bad at correctly using indexes when you use a function in the where clause, especially when that function is pulling out a portion of data from the middle of the column.
The upside of having three separate columns is that your queries will no longer rely on any vendor's implementation of SQL.
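A minimal sketch of that approach (the column names are made up, and it uses a model callback rather than triggers):

# migration: add the computed columns and a composite index
add_column :articles, :created_year,  :integer
add_column :articles, :created_month, :integer
add_column :articles, :created_day,   :integer
add_index  :articles, [:created_year, :created_month, :created_day]

# model: keep the columns in sync and query them directly
class Article < ActiveRecord::Base
  before_save :cache_date_parts

  named_scope :by_year, lambda { |y| {:conditions => { :published => true, :created_year => y }, :order => "created_at DESC"} }

  private

  # created_at is only populated after before_save on create, hence the fallback
  def cache_date_parts
    t = created_at || Time.now
    self.created_year  = t.year
    self.created_month = t.month
    self.created_day   = t.day
  end
end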