I've created a JSON API using Elixir and the Phoenix framework.
I have an endpoint for a create action in my controller that takes JSON data which looks like this:
[%{"opens_detail" =>
    [%{"ua" => "Linux/Ubuntu/Chrome/Chrome 28.0.1500.53",
       "ip" => "55.55.55.55",
       "ts" => 1365190001,
       "location" => "Georgia, US"}],
  "template" => "example-template",
  "metadata" => %{"user_id" => "123", "website" => "www.example.com"},
  "clicks" => 42,
  "ts" => 1365190000,
  "state" => "sent",
  "clicks_detail" =>
    [%{"ua" => "Linux/Ubuntu/Chrome/Chrome 28.0.1500.53",
       "ip" => "55.55.55.55",
       "ts" => 1365190001,
       "url" => "http://www.example.com",
       "location" => "Georgia, US"}],
  "email" => "recipient.email@example.com",
  "subject" => "example subject",
  "sender" => "sender@example.com",
  "_id" => "abc123abc123abc123abc123",
  "tags" => ["password-reset"],
  "opens" => 42}]
My goal is to take this JSON and build a new map from it where some keys and values are renamed to match my schema below:
in web/models/messages.ex
...
schema "messages" do
field :sender, :string
field :uniq_id, :string # equal to '_id' in the payload
field :ts, :utc_datetime
field :template, :string
field :subject, :string
field :email, :string
field :tags, {:array, :string}
field :opens, :integer
field :opens_ip, :string # equal to nested 'ip' value in 'opens_detail'
field :opens_location, :string # equal to nested 'location' value in 'opens_detail'
field :clicks, :integer
field :clicks_ip, :string # equal to nested 'ip' value in 'clicks_detail'
field :clicks_location, :string # equal to nested 'location' value in 'clicks_detail'
field :status, :string # equal to the "state" in the payload
timestamps()
end
...
This is what I tried:
in web/controller/message_controller.ex:
def create(conn, payload) do
  %{payload |
    "uniq_id" => payload["_id"],
    "status" => payload["state"],
    "open_ips" => Enum.at(payload["opens_detail"], 0)["ip"],
    "open_location" => Enum.at(payload["opens_detail"], 0)["location"],
    "click_ips" => Enum.at(payload["clicks_detail"], 0)["ip"],
    "click_location" => Enum.at(payload["clicks_detail"], 0)["location"]
  }
  changeset = Message.changeset(%Message{}, payload)
  ...
end
but it quickly became clear that it wouldn't work, not least because I would still need to remove some keys.
I'm coming from Ruby/Python (Rails/Django) and don't want to pollute my learning of functional programming, specifically Elixir/Phoenix, with my OO habits.
How would you solve this problem?
I would create a new map from scratch instead of updating the original map. You can use get_in to simplify access to nested fields. Here's an example:
map = %{
  uniq_id: get_in(payload, ["_id"]),
  opens_ip: get_in(payload, ["opens_detail", Access.at(0), "ip"]),
  opens_location: get_in(payload, ["opens_detail", Access.at(0), "location"])
}
If you want to pick a subset of fields from the original map, you can use Map.merge and Map.take:
Map.merge(Map.take(payload, ["sender", ...]), %{uniq_id: ...})
But if it's only a couple of fields I'd rather write them out manually.
I have two models, say, User and UserProfile. User has_one UserProfile. UserProfile has a JSON column named details, with age as one of its keys. I need a filter on the User index page, with filtering and sorting available on age. I am using ActiveRecord.
User
has_one :user_profile
#id: integer
#email: string
UserProfile
belongs_to :user
#user_id: integer
#details: json
ActiveAdmin.register User do
filter :id, as: :numeric
filter :email, as: :string
filter :age, as: :numeric
...
end
Thanks.
I have searched through all the ActiveModel::Serializers (v0.9.0) documentation and SO questions I can find, but can't figure this out.
I have objects which can be marked as "published" or "draft". When they aren't published, only the user who created the object should be able to see it. I can obviously set permissions for "show" in my controller, but I also want to remove these objects from the json my "index" action returns unless it is the correct user. Is there a way to remove this object from the json returned by the serializer completely?
In my ActiveModel serializer, I am able to use filter(keys) and overloaded attributes to remove the data, as shown in my code below, but I can't delete the entire object (I'm left having to return an empty {} in my JSON; trying to return nil breaks the serializer).
I'm probably missing something simple. Any help would be much appreciated!
class CompleteExampleSerializer < ExampleSerializer
attributes :id, :title
has_many :children
def attributes
data = super
(object.published? || object.user == scope || scope.admin?) ? data : {}
end
def filter(keys)
keys = super
(object.published? || object.user == scope || scope.admin?) ? keys : {}
end
end
That looks correct; try returning an array instead of a hash when you don't want any keys. Also, I don't think calling super is necessary, because filter receives the keys. Nor do I think defining an attributes method is necessary.
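The corrected filter can be sketched outside AMS with a plain-Ruby stand-in (Record and the free-standing filter function are hypothetical; in the real app this logic lives in the serializer subclass):

```ruby
# Stand-in for the serialized model: only the fields the check needs.
Record = Struct.new(:published, :user) do
  def published?
    published
  end
end

# The fix: when the record should be hidden, return an empty *array* of
# keys rather than an empty hash.
def filter(keys, object, scope)
  object.published? || object.user == scope ? keys : []  # [] instead of {}
end

filter([:id, :title], Record.new(false, :alice), :alice)  # => [:id, :title]
filter([:id, :title], Record.new(false, :alice), :bob)    # => []
```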
I have chapters that can be either published or unpublished. They're owned by a story, so I ended up doing something like the following.
has_many :unpublished_chapters, -> { where published: false }, :class_name => "Chapter", dependent: :destroy
has_many :published_chapters, -> { where published: true }, :class_name => "Chapter", dependent: :destroy
Inside my serializer, I choose to include unpublished_chapters only if the current_user is the owner of those chapters. In AMS 0.8.0 the syntax is like so:
def include_associations!
  include! :published_chapters if ::Authorization::Story.include_published_chapters?(current_user, object, @options)
  include! :unpublished_chapters if ::Authorization::Story.include_unpublished_chapters?(current_user, object, @options)
end
In my case it's not so bad to differentiate the two, and it saves me the trouble of dealing with it on the client. Our situations are similar, but say you wanted to get all of the chapters by visiting the chapters index route. That doesn't make much sense in my app, but you could go to that controller and render a query on that table.
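What the two scoped associations produce can be sketched in plain Ruby (Chapter here is a hypothetical Struct standing in for the ActiveRecord model):

```ruby
# The same chapters split by the published flag, as the two scoped
# has_many associations would return them.
Chapter = Struct.new(:title, :published)

chapters = [Chapter.new('One', true), Chapter.new('Two', false), Chapter.new('Three', true)]
published, unpublished = chapters.partition(&:published)

published.map(&:title)    # => ["One", "Three"]
unpublished.map(&:title)  # => ["Two"]
```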
I'm doing some tests with Sinatra v1.4.4 and ActiveRecord v4.0.2. I've created a database and a table named Company with MySQL Workbench. In the Company table there are two fields, lat and lng, of DECIMAL(10,8) and DECIMAL(11,8) type respectively. Without using migrations, I defined the Company model as follows:
class Company < ActiveRecord::Base
end
Everything works except that lat and lng are served as strings rather than floats/decimals. Is there any way to define the type in the Company class definition above? Here is the Sinatra route serving the JSON response:
get '/companies/:companyId' do |companyId|
begin
gotCompany = Company.find(companyId)
[200, {'Content-Type' => 'application/json'}, [{code:200, company: gotCompany.attributes, message: t.company.found}.to_json]]
rescue
[404, {'Content-Type' => 'application/json'}, [{code:404, message:t.company.not_found}.to_json]]
end
end
ActiveRecord correctly recognizes them as decimal; for example, executing this code prints decimal for the lat and lng columns:
Company.columns.each { |c| puts c.type }
Maybe it's the ActiveRecord attributes method that typecasts them to strings?
Thanks,
Luca
You can wrap the getter methods for those attributes and cast them:
class Company < ActiveRecord::Base
def lat
read_attribute(:lat).to_f
end
def lng
read_attribute(:lng).to_f
end
end
That will convert them to floats, e.g.:
"1.61803399".to_f
=> 1.61803399
Edit:
Want a more declarative way? Just extend ActiveRecord::Base:
# config/initializers/ar_type_casting.rb
class ActiveRecord::Base
def self.cast_attribute(attribute, type_cast)
define_method attribute do
val = read_attribute(attribute)
val.respond_to?(type_cast) ? val.send(type_cast) : val
end
end
end
Then use it like this:
class Company < ActiveRecord::Base
cast_attribute :lat, :to_f
cast_attribute :lng, :to_f
end
Now when you call those methods on an instance, they will be typecast with to_f.
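For readers without a database at hand, the same pattern can be exercised outside ActiveRecord by stubbing read_attribute with a plain hash (Base and the attribute values here are stand-ins, not the real AR internals):

```ruby
# Minimal stand-in for ActiveRecord::Base: attributes live in a hash and
# read_attribute just looks them up.
class Base
  def self.cast_attribute(attribute, type_cast)
    define_method(attribute) do
      val = read_attribute(attribute)
      val.respond_to?(type_cast) ? val.send(type_cast) : val
    end
  end

  def initialize(attrs)
    @attrs = attrs
  end

  def read_attribute(name)
    @attrs[name]
  end
end

class Company < Base
  cast_attribute :lat, :to_f
  cast_attribute :lng, :to_f
end

c = Company.new(lat: "1.61803399", lng: "12.5")
c.lat  # => 1.61803399
c.lng  # => 12.5
```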
Following diego.greyrobot's reply, I modified my Company class with an additional method. It overrides the attributes method and then typecasts the needed fields. Still, something more declarative would be desirable, IMHO.
class Company < ActiveRecord::Base
def attributes
retHash = super
retHash['lat'] = self.lat.to_f
retHash['lng'] = self.lng.to_f
retHash
end
end
I have a Jruby on Rails application with Neo4j.rb and a model, let's say Auth, defined like this:
class Auth < Neo4j::Rails::Model
property :uid, :type => String, :index => :exact
property :provider, :type => String, :index => :exact
property :email, :type => String, :index => :exact
end
And this code:
a = Auth.find :uid => 324, :provider => 'twitter'
# a now represents a node
a.to_json
# outputs: {"auth":{"uid": "324", "provider": "twitter", "email": "email@example.com"}}
Notice that the ID of the node is missing from the JSON representation. I have a RESTful API within my application and I need the id to perform DELETE and UPDATE actions.
I tried this to see if it works:
a.to_json :only => [:id]
But it returns an empty JSON {}.
Is there any way I can get the ID of the node in the JSON representation without rewriting the whole to_json method?
Update: the same problem also applies to the to_xml method.
Thank you!
I am answering my own question. I still think that there is a better way to do this, but, for now, I am using the following hack:
In /config/initializers/neo4j_json_hack.rb I put the following code:
class Neo4j::Rails::Model
  def as_json(options = {})
    repr = super(options)
    repr['_nodeId'] = self.id if persisted?
    repr
  end
end
And now every JSON representation of my persisted Neo4j::Rails::Model objects has a _nodeId parameter.
The ID is typically not included because it shouldn't be exposed outside the Neo4j database. Neo4j doesn't guarantee that the ID will be identical from instance to instance, and it wouldn't surprise me if the ID changed in a distributed, enterprise installation of Neo4j.
You should create your own ID (GUID?), save it as a property on the node, index it, and use that to reference your nodes. Don't expose the Neo4j ID to your users or application, and definitely don't rely on it beyond a single request (e.g. don't save a reference to it in another database or use it to test for equality).
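A minimal sketch of that suggestion, assuming a Neo4j.rb model like the Auth class above (the commented property assignment is hypothetical usage, not verified API):

```ruby
require 'securerandom'

# Generate an application-level GUID and store it as an indexed property,
# instead of exposing the internal Neo4j node id.
guid = SecureRandom.uuid

# Hypothetical Neo4j.rb usage:
# a = Auth.new(uid: '324', provider: 'twitter')
# a[:guid] = guid
# a.save

guid.length  # => 36
```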
My code is:
# require gems
require 'rubygems'
require 'active_record'
# Start connection to the server
ActiveRecord::Base.establish_connection(
:adapter => "mysql",
:host => "localhost",
:user => "root",
:password => "password",
:database => "data"
)
# Create the table
ActiveRecord::Schema.define do
create_table :datatable do |table|
table.column :date, :string
table.column :word, :string
table.column :website, :string
end
end
class Table < ActiveRecord::Base; end
Table.create(:date => datenow, :word => someword, :website => mywebsite)
puts "code completed"
And I get an error when the code wants to write to the table saying:
path/to/mysql_adapter.rb:287:in 'query': Table 'test.tables' doesn't exist (Mysql::Error)
If I create a table called tables within the database (data), then all of my data is put in there. But I want it written to the table I just created (datatable). How can I do this and solve the error?
The error is to be expected. Your model is named Table, so ActiveRecord looks for a table called tables, but you created a table called datatable.
To configure your Table model to use the datatable table, use ActiveRecord::Base.set_table_name like so:
class Table < ActiveRecord::Base
set_table_name 'datatable'
end
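One caveat worth noting: set_table_name was removed in later ActiveRecord versions, where the table_name= setter is used instead. A runnable sketch with ActiveRecord::Base stubbed so it doesn't need the gem:

```ruby
# Stub of ActiveRecord::Base: just enough to show the modern spelling.
# Singleton-class accessors are inherited, so subclasses get their own value.
module ActiveRecord
  class Base
    class << self
      attr_accessor :table_name
    end
  end
end

class Table < ActiveRecord::Base
  self.table_name = 'datatable'  # modern replacement for set_table_name
end

Table.table_name  # => "datatable"
```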
Alternatively, you could rename your model to Datatable.
However, I'd suggest you rename it to something entirely different. Unless you're actually storing data about tables, your model probably shouldn't be called Table. WebsiteWord comes to mind or WordOccurrence or perhaps just Word. I have a feeling it'll save you pain in the long run.