I want to get the AST of an anonymous function that is passed as an argument, like doSomething(fn a -> a * 10 end)
So I tried
def test do
  inspectAST(fn a, b -> a + b + 42 end)
  inspectAST(quote do: fn a, b -> a + b + 42 end)
  inspectFunctionInfo(fn a, b -> a + b + 42 end)
  :ok
end

def inspectAST(this_is_AST) do
  IO.inspect(this_is_AST)
  IO.inspect("--------------------------------")
end

def inspectFunctionInfo(fun) do
  IO.inspect(Function.info(fun))
  IO.inspect("--------------------------------")
end
Results :
iex(3)> Utils.test
#Function<0.30424669/2 in Utils.test/0>
"--------------------------------"
{:fn, [],
 [
   {:->, [],
    [
      [{:a, [], Utils}, {:b, [], Utils}],
      {:+, [context: Utils, import: Kernel],
       [
         {:+, [context: Utils, import: Kernel],
          [{:a, [], Utils}, {:b, [], Utils}]},
         42
       ]}
    ]}
 ]}
"--------------------------------"
[
  pid: #PID<0.485.0>,
  module: Utils,
  new_index: 1,
  new_uniq: <<58, 7, 203, 172, 99, 108, 54, 80, 24, 151, 75, 56, 73, 174, 138,
    177>>,
  index: 1,
  uniq: 30424669,
  name: :"-test/0-fun-1-",
  arity: 2,
  env: [],
  type: :local
]
"--------------------------------"
What I want is the result of inspectAST(quote do: fn a,b -> a + b + 42 end) (the AST), but I would like to pass the function as inspectAST(fn a,b -> a + b + 42 end), without the quote do:.
If anyone has an idea about this, your help would be really appreciated :)
If you want a "function" to be called on the AST, not values, it should be a macro, not a function.
The following should be a good base for what you want to achieve:
defmacro inspectAST(ast) do
  quote do
    IO.inspect(unquote(Macro.escape(ast)))
    :ok
  end
end

def test do
  inspectAST(fn a, b -> a + b + 42 end)
end
you need to use defmacro to treat your parameter as AST, not as a regular value
we want IO.inspect to happen at runtime, not at compile time, so we return the AST for it using quote
what we want to inspect is the content of the ast variable, so we use unquote
but unquote splices the AST back in as code to be evaluated, so we need to escape it into a literal first using Macro.escape/2
Please note that the macro should be defined before calling it, so defmacro has to come before test, not after.
For more advanced explanations about macros, you should check this series of articles, which is very detailed.
Where I'm at
For this example, consider Friends.repo
Table Person has fields :id, :name, :age
Example Ecto query:
iex> from(x in Friends.Person, where: {x.id, x.age} in [{1,10}, {2, 20}, {1, 30}], select: [:name])
When I run this, I get relevant results. Something like:
[
%{name: "abc"},
%{name: "xyz"}
]
But when I try to interpolate the query it throws the error
iex> list = [{1,10}, {2, 20}, {1, 30}]
iex> from(x in Friends.Person, where: {x.id, x.age} in ^list, select: [:name])
** (Ecto.Query.CompileError) Tuples can only be used in comparisons with literal tuples of the same size
I'm assuming I need to do some sort of type casting on the list variable. It is mentioned in the docs here: "When interpolating values, you may want to explicitly tell Ecto what is the expected type of the value being interpolated"
What I need
How do I achieve this for a complex type like this? How do I type cast for a "list of tuples, each of size 2"? Something like [{:integer, :integer}] doesn't seem to work.
If not the above, any alternatives for running a WHERE (col1, col2) in ((val1, val2), (val3, val4), ...) type of query using Ecto Query?
Unfortunately, the error means exactly what it says: only literal tuples are supported.
I was unable to come up with a more elegant and less fragile solution, but we always have a sledgehammer as a last resort. The idea is to generate and execute the raw query.
list = [{1,10}, {2, 20}, {1, 30}]
#⇒ [{1, 10}, {2, 20}, {1, 30}]
values =
Enum.join(for({id, age} <- list, do: "(#{id}, #{age})"), ", ")
#⇒ "(1, 10), (2, 20), (1, 30)"
Repo.query(~s"""
SELECT name FROM persons
JOIN (VALUES #{values}) AS j(v_id, v_age)
ON id = v_id AND age = v_age
""")
The above should return an {:ok, %Postgrex.Result{}} tuple on success. (Interpolating values into raw SQL is only acceptable here because the values are known integers; for untrusted input, use query parameters instead.)
You can do it with a separate array for each field and unnest, which zips the arrays into rows with a column for each array:
ids = [1, 2, 1]
ages = [10, 20, 30]
from x in Friends.Person,
inner_join: j in fragment("SELECT distinct * from unnest(?::int[],?::int[]) AS j(id,age)", ^ids, ^ages),
on: x.id==j.id and x.age==j.age,
select: [:name]
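unnest with two array arguments zips them element-by-element into rows, so the derived table j holds exactly the (id, age) pairs you want to match. The same zipping behavior, sketched in Python for intuition:

```python
ids = [1, 2, 1]
ages = [10, 20, 30]

# unnest(ids, ages) produces one row per index position, like zip:
rows = list(zip(ids, ages))
print(rows)  # [(1, 10), (2, 20), (1, 30)]
```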
Another way of doing it is using JSON:
list = [%{id: 1, age: 10},
%{id: 2, age: 20},
%{id: 1, age: 30}]
from x in Friends.Person,
inner_join: j in fragment("SELECT distinct * from jsonb_to_recordset(?) AS j(id int,age int)", ^list),
on: x.id==j.id and x.age==j.age,
select: [:name]
Update: I now saw the mysql tag; the above was written for Postgres, but maybe it can be used as a base for a MySQL version.
I'm getting this string as a query result from my database:
"%Sample.Struct{list: [], total: \"0.00\", day: 6, id: \"8vfts6\"}"
Is there any way to convert this one back to map?
I'm getting this error decoding it with poison
** (Poison.SyntaxError) Unexpected token: %
(poison) lib/poison/parser.ex:56: Poison.Parser.parse!/2
(poison) lib/poison.ex:83: Poison.decode!/2
I can't fix the way data is being added to the database; I must find a proper way for a key/value route to easily retrieve data from it. (This is just a sample for a more complex result.)
As mentioned in the comments, you should not use Code.eval_string. But there is a way to safely convert your string to an Elixir struct, using the Code module:
iex(1)> encoded = "%Sample.Struct{list: [], total: \"0.00\", day: 6, id: \"8vfts6\"}"
"%Sample.Struct{list: [], total: \"0.00\", day: 6, id: \"8vfts6\"}"
First, get the AST from the string, but use pattern matching to ensure it is the struct you are looking for ({:__aliases__, _, [:Sample, :Struct]}). Any other (potentially malicious) code will fail this match:
iex(2)> {:ok, {:%, _, [{:__aliases__, _, [:Sample, :Struct]}, {:%{}, _, keymap}]} = ast} = Code.string_to_quoted(encoded)
{:ok,
{:%, [line: 1],
[{:__aliases__, [line: 1], [:Sample, :Struct]},
{:%{}, [line: 1], [list: [], total: "0.00", day: 6, id: "8vfts6"]}]}}
Here you have the full AST for your struct, and the keymap. You may now be tempted to use eval_quoted with the AST to get the struct you need:
iex(3)> {struct, _} = Code.eval_quoted(ast)
{%Sample.Struct{day: 6, id: "8vfts6", list: [], total: "0.00"}, []}
iex(4)> struct
%Sample.Struct{day: 6, id: "8vfts6", list: [], total: "0.00"}
But it is still not safe! Someone may put a side-effecting function call into the string, like "%Sample.Struct{list: IO.puts \"Something\"}", which would be executed during evaluation. So you would need to check the keymap first to make sure it contains safe data.
Or you may just use keymap directly, without evaluating anything:
iex(5)> struct(Sample.Struct, keymap)
%Sample.Struct{day: 6, id: "8vfts6", list: [], total: "0.00"}
I have the following data, on which I need to apply an aggregation function followed by a groupby.
My data is as follows: data.csv
id,category,sub_category,count
0,x,sub1,10
1,x,sub2,20
2,x,sub2,10
3,y,sub3,30
4,y,sub3,5
5,y,sub4,15
6,z,sub5,20
Here I'm trying to get the count per sub-category. After that I need to store the result in JSON format. The following piece of code helps me achieve that: test.py
import pandas as pd
df = pd.read_csv('data.csv')
sub_category_total = df['count'].groupby([df['category'], df['sub_category']]).sum()
print(sub_category_total.reset_index().to_json(orient="records"))
The above code gives me the following format.
[{"category":"x","sub_category":"sub1","count":10},{"category":"x","sub_category":"sub2","count":30},{"category":"y","sub_category":"sub3","count":35},{"category":"y","sub_category":"sub4","count":15},{"category":"z","sub_category":"sub5","count":20}]
But, my desired format is as follows:
{
"x":[{
"sub_category":"sub1",
"count":10
},
{
"sub_category":"sub2",
"count":30}],
"y":[{
"sub_category":"sub3",
"count":35
},
{
"sub_category":"sub4",
"count":15}],
"z":[{
"sub_category":"sub5",
"count":20}]
}
By following the discussions at How to convert pandas DataFrame result to user defined json format, I replaced the last 2 lines of test.py with:
g = df.groupby('category')[["sub_category","count"]].apply(lambda x: x.to_dict(orient='records'))
print g.to_json()
It gives me the following output.
{"x":[{"count":10,"sub_category":"sub1"},{"count":20,"sub_category":"sub2"},{"count":10,"sub_category":"sub2"}],"y":[{"count":30,"sub_category":"sub3"},{"count":5,"sub_category":"sub3"},{"count":15,"sub_category":"sub4"}],"z":[{"count":20,"sub_category":"sub5"}]}
Though the above result is somewhat similar to my desired format, I couldn't perform any aggregation there, as it throws an error saying 'numpy.int64' object has no attribute 'to_dict'. Hence I end up getting all of the rows in the data file.
Can somebody help me in achieving the above JSON format?
I think you can first aggregate with sum, with the parameter as_index=False passed to groupby so that the output is a DataFrame df1, and then use the other solution:
df1 = (df.groupby(['category','sub_category'], as_index=False)['count'].sum())
print (df1)
category sub_category count
0 x sub1 10
1 x sub2 30
2 y sub3 35
3 y sub4 15
4 z sub5 20
g = (df1.groupby('category')[["sub_category", "count"]]
        .apply(lambda x: x.to_dict(orient='records')))
print (g.to_json())
{
"x": [{
"sub_category": "sub1",
"count": 10
}, {
"sub_category": "sub2",
"count": 30
}],
"y": [{
"sub_category": "sub3",
"count": 35
}, {
"sub_category": "sub4",
"count": 15
}],
"z": [{
"sub_category": "sub5",
"count": 20
}]
}
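If you'd rather build the nested dict in plain Python instead of going through to_json, a minimal self-contained sketch (inlining the same data.csv contents) is:

```python
import io
import json

import pandas as pd

CSV = """\
id,category,sub_category,count
0,x,sub1,10
1,x,sub2,20
2,x,sub2,10
3,y,sub3,30
4,y,sub3,5
5,y,sub4,15
6,z,sub5,20
"""

df = pd.read_csv(io.StringIO(CSV))

# Aggregate first, exactly as above ...
df1 = df.groupby(["category", "sub_category"], as_index=False)["count"].sum()

# ... then nest the aggregated rows under their category key.
nested = {}
for cat, sub, cnt in zip(df1["category"], df1["sub_category"], df1["count"]):
    nested.setdefault(cat, []).append({"sub_category": sub, "count": int(cnt)})

print(json.dumps(nested, indent=1))
```

This sidesteps the Series-of-lists step entirely and gives you a regular dict to serialize however you like.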
I wasn't entirely sure what to search for, so if this question has been asked before, I apologize in advance:
I have two functions
R := Rref*mx(mx^(4/3) - C0)^(-1)
M := Mref*(mx + C1*mx^(-1)*((1 - C0*mx^(-4/3))^(-3) - 1))
where Rref, Mref, C0 and C1 are constants and mx is the variable. I need to plot R as a function of M. Surely there must be something available in Mathematica to do such a plot - I just can't seem to find it.
The comment is correct, in that what you have is a set of two "parametric equations". You would use the ParametricPlot command. However, the syntax of functions with parameters is sometimes tricky, so let me give you my best recommendation:
R[Rref_, C0_, C1_][mx_] = Rref*mx (mx^(4/3) - C0)^(-1);
M[Mref_, C0_, C1_][mx_] = Mref*(mx + C1*mx^(-1)*((1 - C0*mx^(-4/3))^(-3) - 1));
I like that notation better because you can do things like derivatives:
R[Rref,C0,C1]'[mx]
(* Output: -((4 mx^(4/3) Rref)/(3 (-C0 + mx^(4/3))^2)) + Rref/(-C0 + mx^(4/3)) *)
Then you just plot the functions parametrically:
ParametricPlot[
{R[0.6, 0.3, 0.25][mx], M[0.2, 0.3, 0.25][mx]},
{mx, -10, 10},
PlotRange -> {{-10, 10}, {-10, 10}}
]
You can box this up in a Manipulate command to play with the parameters:
Manipulate[
ParametricPlot[
{R[Rref, C0, C1][mx], M[Mref, C0, C1][mx]},
{mx, -mmax, mmax},
PlotRange -> {{-10, 10}, {-10, 10}}
],
{Rref, 0, 1},
{Mref, 0, 1},
{C0, 0, 1},
{C1, 0, 1},
{mmax, 1, 10}
]
That should do it, I think.
I am using Postgres' json data type but want to do a query/ordering with data that is nested within the json.
I want to order or query with .where on the json data type. For example, I want to query for users that have a follower count > 500 or I want to order by follower or following count.
Thanks!
Example:
model User
data: {
"photos"=>[
{"type"=>"facebook", "type_id"=>"facebook", "type_name"=>"Facebook", "url"=>"facebook.com"}
],
"social_profiles"=>[
{"type"=>"vimeo", "type_id"=>"vimeo", "type_name"=>"Vimeo", "url"=>"http://vimeo.com/", "username"=>"v", "id"=>"1"},
{"bio"=>"I am not a person, but a series of plants", "followers"=>1500, "following"=>240, "type"=>"twitter", "type_id"=>"twitter", "type_name"=>"Twitter", "url"=>"http://www.twitter.com/", "username"=>"123", "id"=>"123"}
]
}
For anyone who stumbles upon this: I have come up with a list of queries using ActiveRecord and Postgres' JSON data type. Feel free to edit this to make it clearer.
Documentation to the JSON operators used below: https://www.postgresql.org/docs/current/functions-json.html.
# Sort based on the JSON data:
Post.order("data->'hello' DESC")
=> #<ActiveRecord::Relation [
#<Post id: 4, data: {"hi"=>"23", "hello"=>"22"}>,
#<Post id: 3, data: {"hi"=>"13", "hello"=>"21"}>,
#<Post id: 2, data: {"hi"=>"3", "hello"=>"2"}>,
#<Post id: 1, data: {"hi"=>"2", "hello"=>"1"}>]>
# Where inside a JSON object:
Record.where("data ->> 'likelihood' = '0.89'")
# Example json object:
r.column_data
=> {"data1"=>[1, 2, 3],
"data2"=>"data2-3",
"array"=>[{"hello"=>1}, {"hi"=>2}],
"nest"=>{"nest1"=>"yes"}}
# Nested search:
Record.where("column_data -> 'nest' ->> 'nest1' = 'yes' ")
# Search within array:
Record.where("column_data #>> '{data1,1}' = '2' ")
# Search within a value that's an array:
Record.where("column_data #> '{array,0}' ->> 'hello' = '1' ")
# this only find for one element of the array.
# All elements:
Record.where("column_data ->> 'array' LIKE '%hello%' ") # bad
Record.where("column_data ->> 'array' LIKE ?", "%hello%") # good
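To keep the operators straight, here is a plain-Python analogy of what each extraction above walks through for the example column_data (illustrative only; the real operators also differ in return type, json vs. text):

```python
column_data = {
    "data1": [1, 2, 3],
    "data2": "data2-3",
    "array": [{"hello": 1}, {"hi": 2}],
    "nest": {"nest1": "yes"},
}

# column_data -> 'nest' ->> 'nest1'   (step into an object, return text)
nest1 = column_data["nest"]["nest1"]           # "yes"

# column_data #>> '{data1,1}'         (follow a path, return text)
data1_1 = str(column_data["data1"][1])         # "2"

# column_data #> '{array,0}' ->> 'hello'  (path into the array, then a key)
hello = str(column_data["array"][0]["hello"])  # "1"
```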
According to this http://edgeguides.rubyonrails.org/active_record_postgresql.html#json
there's a difference in using -> and ->>:
# db/migrate/20131220144913_create_events.rb
create_table :events do |t|
t.json 'payload'
end
# app/models/event.rb
class Event < ActiveRecord::Base
end
# Usage
Event.create(payload: { kind: "user_renamed", change: ["jack", "john"]})
event = Event.first
event.payload # => {"kind"=>"user_renamed", "change"=>["jack", "john"]}
## Query based on JSON document
# The -> operator returns the original JSON type (which might be an object), whereas ->> returns text
Event.where("payload->>'kind' = ?", "user_renamed")
So you should try Record.where("data ->> 'status' = '200'") (note that ->> returns text, so compare against a string) or the operator that suits your query (http://www.postgresql.org/docs/current/static/functions-json.html).
Your question doesn't seem to correspond to the data you've shown, but if your table is named users and data is a field in that table with JSON like {count:123}, then the query
SELECT * FROM users WHERE (data->>'count')::int > 500;
will work. Take a look at your database schema to make sure you understand the layout, and check that the query works before complicating it with Rails conventions.
JSON filtering in Rails
Event.create(payload: [{ "name": 'Jack', "age": 12 },
                       { "name": 'John', "age": 13 },
                       { "name": 'Dohn', "age": 24 }])
Event.where('payload @> ?', '[{"age": 12}]')
#You can also filter by name key
Event.where('payload @> ?', '[{"name": "John"}]')
#You can also filter by {"name":"Jack", "age":12}
Event.where('payload @> ?', [{"name":"Jack", "age":12}].to_json)
You can find more about this here
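For intuition, jsonb containment on an array asks: for every element of the pattern, does the array contain some element with all of that pattern's key/value pairs? A rough plain-Python sketch of that idea (an analogy, not Postgres' actual implementation):

```python
def jsonb_array_contains(array, pattern):
    """Rough analogy of Postgres jsonb `array @> pattern` for arrays of
    objects: every pattern object must be matched by some array element
    carrying all of its key/value pairs."""
    return all(
        any(all(elem.get(k) == v for k, v in pat.items()) for elem in array)
        for pat in pattern
    )

payload = [{"name": "Jack", "age": 12},
           {"name": "John", "age": 13},
           {"name": "Dohn", "age": 24}]

assert jsonb_array_contains(payload, [{"age": 12}])
assert jsonb_array_contains(payload, [{"name": "Jack", "age": 12}])
assert not jsonb_array_contains(payload, [{"age": 99}])
```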