I want to create two indices for the same model and search them separately.
I am using:
gem 'thinking-sphinx', '3.2.0'
gem 'riddle', '1.5.11'
ThinkingSphinx::Index.define :product, :with => :active_record, :delta => ThinkingSphinx::Deltas::DelayedDelta do
  indexes :field_a
end

ThinkingSphinx::Index.define :product, :name => "active_product", :with => :active_record, :delta => ThinkingSphinx::Deltas::DelayedDelta do
  indexes :field_a
  where "(active = 1)"
end
When I tried to search this way to get only the active products:
Product.search_for_ids "", :match_mode => :extended, :index => "active_product_core, active_product_delta", :page => params[:page], :per_page => 50, :sort_mode => :extended, :order => "field_a desc"
it ran a query like this and listed all products:
SELECT * FROM `product_core`, `product_delta` WHERE `sphinx_deleted` = 0 ORDER BY `field_a` desc LIMIT 0, 50 OPTION max_matches=50000
How can I get only the active products, i.e. make sure the query runs like this?
SELECT * FROM `active_product_core`, `active_product_delta` WHERE `sphinx_deleted` = 0 ORDER BY `field_a` desc LIMIT 0, 50 OPTION max_matches=50000
Note: the above worked fine in Thinking Sphinx version 2:
gem 'thinking-sphinx', '2.0.14'
gem 'riddle', '1.5.3'
In TS v3, the search option is now :indices rather than :index, and expects an array of index names. So, try the following:
Product.search_for_ids(
  :indices => ["active_product_core", "active_product_delta"],
  :page => params[:page],
  :per_page => 50,
  :order => "field_a desc"
)
I've removed :sort_mode and :match_mode from the options you were using. The extended modes are the only ones available via Sphinx's SphinxQL protocol (which is what TS v3 uses), so you don't need to specify them.
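If you'd rather not maintain a second index at all, another option is to index active as a Sphinx attribute and filter on it at search time. A sketch, assuming active is a boolean column on products:

ThinkingSphinx::Index.define :product, :with => :active_record do
  indexes :field_a
  has :active
end

Product.search_for_ids(
  :with => {:active => true},
  :page => params[:page],
  :per_page => 50,
  :order => "field_a desc"
)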
I'm not using the sqlite3 gem; I'm using the mysql2 gem.
I'm retrieving data from a MySQL database, filtered on a certain event type and severity. However, it returns only one row instead of an array of results. It really puzzles me. Shouldn't .map return an array?
result = connect.query("SELECT * FROM data WHERE event_type = 'ALARM_OPENED' AND severity = '2'")
equipments = result.map do |record|
  [
    record['sourcetime'].strftime('%H:%M:%S'),
    record['equipment_id'],
    record['description']
  ]
end
p equipments
I had misread your question... I think what you are looking for is in here.
UPDATE
You can use each instead, like this:
#!/usr/bin/env ruby
require 'mysql2'
connect = Mysql2::Client.new(:host => '', :username => '', :password => '', :database => '')
equipments = []
result = connect.query("SELECT * FROM data WHERE event_type = 'ALARM_OPENED' AND severity = '2'", :symbolize_keys => true).each do |row|
  equipments << [
    row[:sourcetime].strftime('%H:%M:%S'),
    row[:equipment_id],
    row[:description]
  ]
end
puts "#equipments {equipments}"
EDITED:
I forgot to add .each at the end of the query, so it was returning the initialized empty array instead.
You need to change your SQL statement:
result = connect.query("SELECT * FROM data WHERE event_type = 'ALARM_OPENED' AND severity = '2'", :as => :array)
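Note that with :as => :array each row comes back as an array instead of a hash, so you read columns by position. A minimal sketch, assuming the column names from your question and selecting them explicitly so the positions are predictable:

# select the columns explicitly so the array positions are known
result = connect.query(
  "SELECT sourcetime, equipment_id, description FROM data WHERE event_type = 'ALARM_OPENED' AND severity = '2'",
  :as => :array
)
equipments = result.map do |row|
  [row[0].strftime('%H:%M:%S'), row[1], row[2]]
end
p equipments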
I want to return about 90k items in a JSON document but I'm getting this error when I make the call:
Timeout::Error in ApisController#api_b
time's up!
Rails.root: /root/api_b
I am simply running "rails s" with the default rails server.
What's the way to make this work and return the document?
Thanks
@bs.each do |a|
  puts "dentro do bs.each"
  @final << { :Email => a['headers']['to'], :At => a['date'], :subject => a['headers']['subject'], :Type => a['headers']['status'], :Message_id => a['headers']['message_id'] }
end
@bs being the BSON cursor from MongoDB. The timeout happens at the @final << ... line.
If you are experiencing timeouts from Rails and it is possible to cache the data (e.g. the data changes infrequently), I would generate the response in the background using resque or delayed_job and then have Rails dump that to the client. Or, if the data cannot be cached, use a lightweight Rack handler like Sinatra or Rails Metal to generate the responses.
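A minimal sketch of the Resque route follows. The job class, queue name, database/collection names, and cache key are illustrative assumptions, not from your app:

# hypothetical Resque job that pre-renders the JSON document
class EmailDumpJob
  @queue = :email_dumps

  def self.perform
    collection = Mongo::Connection.new['sample-dbs']['email-test']
    # trimmed to two fields for brevity
    payload = collection.find({}).map do |a|
      { :Email => a['headers']['to'], :At => a['date'] }
    end
    Rails.cache.write('email_dump_json', payload.to_json)
  end
end

# The controller action then just serves the cached string:
# render :json => Rails.cache.read('email_dump_json')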
Edited to reflect sample data
I was able to run the following code in a Rails 3.0.9 instance against a high-performance Mongo 1.8.4 instance. I was using the mongo gem 1.3.1, bson_ext 1.3.1, webrick 1.3.1 and Ruby 1.9.2p180 x64. It did not time out, but it took some time to load. My sample Mongo DB has 100k records and contains no indexes.
# the profiling hooks below use the ruby-prof gem
before_filter :profile_start
after_filter :profile_end

def index
  db = @conn['sample-dbs']
  collection = db['email-test']
  @final = []
  @bs = collection.find({})
  @bs.each do |a|
    puts "dentro do bs.each"
    @final << { :Email => a['headers']['to'], :At => a['date'], :subject => a['headers']['subject'], :Type => a['headers']['status'], :Message_id => a['headers']['message_id'] }
  end
  render :json => @final
end

private

def profile_start
  RubyProf.start
end

def profile_end
  RubyProf::FlatPrinter.new(RubyProf.stop).print
end
A more efficient way to dump out the records would be:
@bs = collection.find({}, {:fields => ["headers", "date"]})
@final = @bs.map { |a| { :Email => a['headers']['to'], :At => a['date'], :subject => a['headers']['subject'], :Type => a['headers']['status'], :Message_id => a['headers']['message_id'] } }
render :json => @final
My data generator:
100000.times do |i|
  p i
  @coll.insert({ :date => Time.now(), :headers => { "to" => "me@foo.com", "subject" => "meeeeeeeeee", "status" => "ffffffffffffffffff", "message_id" => "1234634673" } })
end
I've been deploying some apps to Heroku recently. I run MySQL on my local dev machine and have spent a little while updating some of my scopes to work in PostgreSQL. However, one that raises an error is proving difficult to change.
For the time being I've got a database-specific case statement in my model. I understand why the error regarding the MySQL date functions occurs, but I'm not sure this is the most efficient solution. Does anyone have a better way of implementing a fix that will work with both MySQL and PostgreSQL?
case ActiveRecord::Base.connection.adapter_name
when 'PostgreSQL'
  named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (date_part('year', created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_month, lambda { |*args| {:conditions => ["published = ? AND (date_part('month', created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_day, lambda { |*args| {:conditions => ["published = ? AND (date_part('day', created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
else
  named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (YEAR(created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_month, lambda { |*args| {:conditions => ["published = ? AND (MONTH(created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
  named_scope :by_day, lambda { |*args| {:conditions => ["published = ? AND (DAY(created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
end
FYI, this is the PostgreSQL error that I am getting:
PGError: ERROR: function month(timestamp without time zone) does not exist LINE 1: ...T * FROM "articles" WHERE (((published = 't' AND (MONTH(crea... ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. : SELECT * FROM "articles" WHERE (((published = 't' AND (MONTH(created_at) = '11')) AND (published = 't' AND (YEAR(created_at) = '2010'))) AND ("articles"."published" = 't')) ORDER BY created_at DESC LIMIT 5 OFFSET 0
Thanks in advance for any input anyone has.
You should be using the standard EXTRACT function:
named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (extract(year from created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
Both PostgreSQL and MySQL support it.
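The other two scopes follow the same pattern (mirroring the question's code):

named_scope :by_month, lambda { |*args| {:conditions => ["published = ? AND (extract(month from created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }
named_scope :by_day, lambda { |*args| {:conditions => ["published = ? AND (extract(day from created_at) = ?)", true, (args.first)], :order => "created_at DESC"} }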
Unfortunately this happens a lot; however, you have the right general idea.
Your first method of attack is to see if there is a function that exists in both MySQL and Postgres, but that isn't always possible.
The one suggestion I would make is that there is a lot of code duplication in this solution. Considering the condition is the only compatibility issue here, I would factor the compatibility check out into its own method:
Example:
named_scope :by_year, lambda { |*args| {:conditions => ["published = ? AND (#{by_year_condition} = ?)", true, (args.first)], :order => "created_at DESC"} }

# ...code...

def self.by_year_condition
  if ActiveRecord::Base.connection.adapter_name == 'PostgreSQL'
    "date_part('year', created_at)"
  else
    "YEAR(created_at)"
  end
end
Another option would be to create computed columns for each of your date parts (day, month, and year) and to query directly against those. You could keep them up to date with your model code or with triggers. You'll also get the benefit of being able to index on various combinations on your year, month, and day columns. Databases are notoriously bad at correctly using indexes when you use a function in the where clause, especially when that function is pulling out a portion of data from the middle of the column.
The upside of having three separate columns is that your query will no longer rely on any vendor's implementations of SQL.
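A rough sketch of that approach, with illustrative column names and a callback that falls back to the current time when created_at is not yet set:

# migration
add_column :articles, :created_year, :integer
add_column :articles, :created_month, :integer
add_column :articles, :created_day, :integer
add_index :articles, [:created_year, :created_month, :created_day]

# model
class Article < ActiveRecord::Base
  before_save :set_date_parts

  private

  def set_date_parts
    t = created_at || Time.now
    self.created_year = t.year
    self.created_month = t.month
    self.created_day = t.day
  end
end

The :by_month scope then becomes a plain equality condition on created_month, which the index above can serve directly.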
Right now I have only one condition in my Project.paginate call.
The code is below:
def list
  @projects = Project.paginate(:page => params[:page], :per_page => 100, :order => (sort_column + ' ' + arrow), :conditions => ["description LIKE ?", "%#{query}%"])
I want to add another condition here, but it is proving difficult. I've tried
@projects = Project.paginate(:page => params[:page], :per_page => 100, :order => (sort_column + ' ' + arrow), :conditions => ["description OR name LIKE ?", "%#{query}%"])
but I'm getting a SQL bind error. Any ideas? I can't use the = sign either.
You need to have two bind variables in your conditions array:
qt = "%#{query}%"
@projects = Project.paginate(:conditions =>
["description LIKE ? OR name LIKE ?", qt, qt], ..)
I have the following tables:
User :has_many Purchases
Item :has_many Purchases
Item has a column "amount" (which can be positive or negative), and I need to find all Users that have a positive SUM of Item.amounts (over all Purchases each one has made).
What does this query look like? (I'm not sure how to handle SUM correctly in this case.)
I started out with the following, but obviously it's wrong... (it wouldn't "include" Purchases that have an Item with a negative Item.amount):
@users = User.find(:all,
  :include => {:purchases => :item},
  :select => "SUM(item.amount)",
  :order => "...",
  :conditions => "...",
  :group => "users.id",
  :having => "SUM(item.amount) > 0"
)
Thanks for your help with this!
Tom
Try this (it assumes has_many :items, :through => :purchases is declared on User):
User.all(:joins => :items, :group => "users.id",
  :having => "SUM(items.amount) > 0")
It sounds like this is a good case for some model methods.
I didn't test this but I think you want to do something similar to the following:
class User < ActiveRecord::Base
  has_many :purchases
  has_many :items, :through => :purchases

  def items_total
    # map over the items to get the amounts,
    # compact to get rid of nils,
    # and reduce with :+ to sum the total
    items.map(&:amount).compact.reduce(0, :+)
  end
end
then, on a user instance:
user.items_total