This is the correctly ordered array with MySQL:
[
[1330210800000, 1],
[1330297200000, 6],
[1330383600000, 10],
[1330470000000, 2],
[1330556400000, 5],
[1330815600000, 9],
[1331593200000, 2],
[1331852400000, 4],
[1331938800000, 8],
[1332111600000, 8],
[1332198000000, 4],
[1332284400000, 8],
[1332370800000, 3],
[1332630000000, 2]
]
But with PostgreSQL the array is:
[
[1330588800000, 5],
[1332399600000, 3],
[1330848000000, 9],
[1330416000000, 10],
[1331622000000, 2],
[1330329600000, 6],
[1330502400000, 2],
[1332140400000, 8],
[1332313200000, 8],
[1330243200000, 1],
[1332226800000, 4],
[1331967600000, 8],
[1332658800000, 2],
[1331881200000, 4]
]
With PostgreSQL the order is wrong, the dates are different, and so is the count of kliks.
This is the query in my controller:
@kliks = Klik.count(:group => "DATE(created_at)")
             .map { |k, v| [Time.parse(k).to_i * 1000, v] }
You haven't specified any particular order in your query so the database is free to return your results in any order it wants. Apparently MySQL is ordering the results as a side effect of its GROUP BY but PostgreSQL won't necessarily do that. So your first "bug" is just an incorrect assumption on your part. If you want the database to do the sorting then you want something like:
Klik.count(:group => 'date(created_at)', :order => :date_created_at)
If you throw out the * 1000 and sort the integer timestamps:
1330210800, 1, MySQL
1330243200, 1, PostgreSQL
1330297200, 6, MySQL
1330329600, 6, PostgreSQL
1330383600, 10, MySQL
1330416000, 10, PostgreSQL
...
You'll see that they actually line up quite nicely, and the integer timestamps in each MySQL/PostgreSQL pair differ by 32400s (AKA 9 hours) or 28800s (AKA 8 hours, i.e. 9 hours with a DST adjustment). Presumably you're including a time zone (with DST) in one of your conversions while the other is left in UTC.
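A quick Python sanity check on the first pair (a minimal sketch; the zone offsets here are just examples that happen to put each stamp at midnight, not necessarily the zones in play):

from datetime import datetime, timezone, timedelta

mysql_ts = 1330210800  # first MySQL timestamp, in seconds
pg_ts = 1330243200     # first PostgreSQL timestamp, in seconds

print((pg_ts - mysql_ts) // 3600)  # => 9 (hours apart)

# Each stamp is midnight in *some* zone, e.g. UTC+1 vs UTC-8:
print(datetime.fromtimestamp(mysql_ts, timezone(timedelta(hours=1))))
# => 2012-02-26 00:00:00+01:00
print(datetime.fromtimestamp(pg_ts, timezone(timedelta(hours=-8))))
# => 2012-02-26 00:00:00-08:00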
You are simply missing the order clause. By default, database servers return groups in "random" order. The rule is: when you need a fixed order, always use ORDER BY (in Rails, that's :order).
I am trying to understand a conceptual approach to integrating data into a stack of observation frames when that data doesn't have the same dimensionality as the frames.
Example Frame: [1, 2, 3]
Example extra data: [a, b]
Currently, I am approaching this as follows, with the example of 3 frames (rows) representing temporal observation data over 3 time periods, and a 4th frame (row) representing non-temporal data for which only the most recent observed values are needed.
Example:
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[a, b, NaN]
]
The a and b are the added data, and the NaN is just a filler value added to match the dimensions of the existing data. Would there be differences (all inputs welcomed) between using NaN and an outlier value like -1 that would never be observed by other measures?
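For concreteness, here is a minimal numpy sketch of the NaN-padded layout (the values for a and b are made up for illustration):

import numpy as np

frames = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]], dtype=float)
extra = np.array([10.0, 20.0])  # hypothetical values for a and b

# Pad the extra row with NaN to match the frame width, then stack it on:
padded = np.full(frames.shape[1], np.nan)
padded[:extra.size] = extra
obs = np.vstack([frames, padded])
print(obs)
# [[ 1.  2.  3.]
#  [ 4.  5.  6.]
#  [ 7.  8.  9.]
#  [10. 20. nan]]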
One possible alternative would be to structure the observation data as such:
[
[1, 2, 3, a, b],
[4, 5, 6, a-1, b-1],
[7, 8, 9, a-2, b-3],
]
It seems this would be a noticeable increase in resources, and the measures of a and b (in my context) can be universally understood as "bigger is better" or "smaller is better" without context from the other data values.
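The column-appending alternative would look something like this (again with made-up numbers):

import numpy as np

frames = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]], dtype=float)
extras = np.array([[10.0, 20.0],   # a,   b
                   [ 9.0, 19.0],   # a-1, b-1
                   [ 8.0, 17.0]])  # a-2, b-3

obs = np.hstack([frames, extras])  # one (a, b) pair appended to every frame
print(obs.shape)  # => (3, 5)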
Develop a Python method change(amount) that for any integer amount in the range from 24 to 1000 returns a list consisting of numbers 5 and 7 only, such that their sum is equal to amount. For example, change(28) may return [7, 7, 7, 7], while change(49) may return [7, 7, 7, 7, 7, 7, 7] or [5, 5, 5, 5, 5, 5, 5, 7, 7] or [7, 5, 5, 5, 5, 5, 5, 5, 7].
To solve this quiz, implement the method change(amount) on your machine, test it on several inputs, and then paste your code in the field below and press the submit quiz button. Your submission should contain the change method only (in particular, make sure to remove all print statements).
Just started programming, quite proud of this. Here you go:
To use: print(change(amount))
def change(amount):
    if amount < 24 or amount > 1000:
        return 'error'
    array = []
    while True:
        # Once the remaining amount is a multiple of 5, fill up with 5s and finish.
        if amount % 5 == 0:
            for i in range(amount // 5):
                array.append(5)
            return array
        # Otherwise peel off a 7 and try again.
        array.append(7)
        amount -= 7
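A quick way to convince yourself it works across the whole range (a throwaway check, not part of the quiz submission):

# Every result should sum to the amount and contain only 5s and 7s:
for n in range(24, 1001):
    coins = change(n)
    assert sum(coins) == n and set(coins) <= {5, 7}, n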
Where I'm at
For this example, consider Friends.repo
Table Person has fields :id, :name, :age
Example Ecto query:
iex> from(x in Friends.Person, where: {x.id, x.age} in [{1,10}, {2, 20}, {1, 30}], select: [:name])
When I run this, I get relevant results. Something like:
[
%{name: "abc"},
%{name: "xyz"}
]
But when I try to interpolate the query it throws the error
iex> list = [{1,10}, {2, 20}, {1, 30}]
iex> from(x in Friends.Person, where: {x.id, x.age} in ^list, select: [:name])
** (Ecto.Query.CompileError) Tuples can only be used in comparisons with literal tuples of the same size
I'm assuming I need to do some sort of type casting on the list variable. It is mentioned in the docs here: "When interpolating values, you may want to explicitly tell Ecto what is the expected type of the value being interpolated"
What I need
How do I achieve this for a complex type like this? How do I type cast for a "list of tuples, each of size 2"? Something like [{:integer, :integer}] doesn't seem to work.
If not the above, any alternatives for running a WHERE (col1, col2) in ((val1, val2), (val3, val4), ...) type of query using Ecto Query?
Unfortunately, the error means exactly what it says: only literal tuples are supported.
I was unable to come up with a more elegant and less fragile solution, but we always have a sledgehammer as the last resort. The idea is to generate and execute a raw query. (Note that this interpolates the values directly into the SQL string, so only do it with trusted input.)
list = [{1,10}, {2, 20}, {1, 30}]
#⇒ [{1, 10}, {2, 20}, {1, 30}]
values =
Enum.join(for({id, age} <- list, do: "(#{id}, #{age})"), ", ")
#⇒ "(1, 10), (2, 20), (1, 30)"
Repo.query(~s"""
SELECT name FROM persons
JOIN (VALUES #{values}) AS j(v_id, v_age)
ON id = v_id AND age = v_age
""")
The above should return the {:ok, %Postgrex.Result{}} tuple on success.
You can do it with a separate array for each field and unnest, which zips the arrays into rows with a column for each array:
ids = [1, 2, 1]
ages = [10, 20, 30]
from x in Friends.Person,
inner_join: j in fragment("SELECT distinct * from unnest(?::int[],?::int[]) AS j(id,age)", ^ids, ^ages),
on: x.id==j.id and x.age==j.age,
select: [:name]
Another way of doing it is using JSON:
list = [%{id: 1, age: 10},
%{id: 2, age: 20},
%{id: 1, age: 30}]
from x in Friends.Person,
inner_join: j in fragment("SELECT distinct * from jsonb_to_recordset(?) AS j(id int,age int)", ^list),
on: x.id==j.id and x.age==j.age,
select: [:name]
Update: I have now seen the mysql tag. The above was written for Postgres, but maybe it can be used as a base for a MySQL version.
I am new to deep learning, and I think I've got the point of Understanding NumPy's Convolve.
I tried this in numpy
np.convolve([3, 4], [1, 1, 5, 5], 'valid')
the output is
array([ 7, 19, 35])
According to the link the second element of the output should be 23.
[3 4]
[1 1 5 5]
= 3 * 1 + 4 * 5 = 23
It seems that the second element (19) is wrong in my case, though I have no idea how or why. Any responses will be appreciated.
I think you are confusing this with the convolution implementation in neural networks, which is actually cross-correlation. However, if you refer to the mathematical definition of convolution, you will see that the second function has to be time-reversed (or mirrored). Also, note that numpy swaps the arguments if the second one is bigger (as in your case). So the result you get is obtained as follows:
[1*4 + 3*1, 1*4 + 3*5, 5*4 + 3*5]  # => [7, 19, 35]
In case you want numpy to perform calculations as you did, you should use:
np.correlate([3, 4], [1, 1, 5, 5], 'valid')
The reason is that numpy reverses the shorter array: here [3, 4] becomes [4, 3]. This is done because of the definition of convolution (you can find more information in the Definition section of the Wikipedia article: https://en.wikipedia.org/wiki/Convolution).
So in fact, np.convolve([3, 4], [1, 1, 5, 5], 'valid') computes the second element as:
[4 3]
[1 1 5 5]
= 4 * 1 + 3 * 5 = 19
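You can check the flip against np.convolve yourself (a minimal sketch):

import numpy as np

a = [3, 4]
v = [1, 1, 5, 5]

print(np.convolve(a, v, 'valid'))  # => [ 7 19 35]

# Same thing by hand: flip the shorter list, then slide it along the longer:
flipped = a[::-1]  # [4, 3]
manual = [sum(f * x for f, x in zip(flipped, v[i:i + len(a)]))
          for i in range(len(v) - len(a) + 1)]
print(manual)      # => [7, 19, 35]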
:)
Using Ruby 1.9.
I have an array [1, 2, 3]
I need to convert it to the format ('1', '2', '3') in order to use it inside SQL queries (IN statements); the database is MySQL. Please suggest a good solution.
Thanks :)
Looking at the comments above not sure you still want to do this, but just for fun:
"('#{ [1,2,3].map(&:to_s).join("\',\'") }')"
#=> "('1','2','3')"
UPDATE: Based on comments from @tadman.
Assuming a SQL implementation that supports positional placeholders, here is some pseudo code (note that $1-style placeholders are PostgreSQL's; MySQL's client libraries typically use ?):
irb(main):003:0> array = [1, 2, 3, 4]
=> [1, 2, 3, 4]
irb(main):004:0> placeholders = (1..array.size).map { |i| "$#{i}" }.join(",")
=> "$1,$2,$3,$4"
irb(main):005:0> ["SELECT * FROM table WHERE id IN (#{placeholders})", array]
=> ["SELECT * FROM table WHERE id IN ($1,$2,$3,$4)", [1, 2, 3, 4]]
Note that the placeholders come from the positions (1..array.size), not from the values themselves.