BigQuery has facilities to parse JSON in interactive queries: just store the JSON-encoded object as a string and query it in real time with functions like JSON_EXTRACT_SCALAR.
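For example, something along these lines, where mydataset.mytable, json_column, and the '$.user.name' path are just placeholders:
-- json_column holds the JSON-encoded object as a STRING
SELECT JSON_EXTRACT_SCALAR(json_column, '$.user.name') AS user_name
FROM `mydataset.mytable`
LIMIT 10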
However, I can't find a way to discover all the keys (properties) in these objects.
Can I use a UDF for this?
Here's something that uses Standard SQL:
CREATE TEMP FUNCTION jsonObjectKeys(input STRING)
RETURNS Array<String>
LANGUAGE js AS """
return Object.keys(JSON.parse(input));
""";
WITH keys AS (
  SELECT jsonObjectKeys(myColumn) AS keys
  FROM `myProject.myTable`
  WHERE myColumn IS NOT NULL
)
SELECT DISTINCT k
FROM keys
CROSS JOIN UNNEST(keys.keys) AS k
ORDER BY k
The version below fixes some "issues" in the original answer, like:
1. only the first level of keys was emitted
2. having to manually compile and then run a final query to extract info based on the discovered keys
SELECT type, key, value, COUNT(1) AS weight
FROM JS(
(SELECT json, type
FROM [fh-bigquery:openlibrary.ol_dump_20151231#0]
WHERE type = '/type/edition'
),
json, type, // Input columns
"[{name: 'type', type:'string'}, // Output schema
{name: 'key', type:'string'},
{name: 'value', type:'string'}]",
"function(r, emit) { // The function
x = JSON.parse(r.json);
processKey(x, '');
function processKey(node, parent) {
if (parent !== '') {parent += '.'};
Object.keys(node).map(function(key) {
value = node[key].toString();
if (value !== '[object Object]') {
emit({type:r.type, key:parent + key, value:value});
} else {
processKey(node[key], parent + key);
};
});
};
}"
)
GROUP EACH BY type, key, value
ORDER BY weight DESC
LIMIT 1000
The result is as below:
Row  type           key                 value                       weight
1    /type/edition  type.key            /type/edition               25140209
2    /type/edition  last_modified.type  /type/datetime              25140209
3    /type/edition  created.type        /type/datetime              17092292
4    /type/edition  languages.0.key     /languages/eng              14514830
5    /type/edition  notes.type          /type/text                  11681480
6    /type/edition  revision            2                           8714084
7    /type/edition  latest_revision     2                           8704217
8    /type/edition  revision            3                           5041680
9    /type/edition  latest_revision     3                           5040634
10   /type/edition  created.value       2008-04-01T03:28:50.625462  3579095
11   /type/edition  revision            1                           3396868
12   /type/edition  physical_format     Paperback                   3181270
13   /type/edition  revision            4                           3053266
14   /type/edition  latest_revision     4                           3053197
15   /type/edition  revision            5                           2076094
16   /type/edition  latest_revision     5                           2076072
17   /type/edition  publish_country     nyu                         1727347
18   /type/edition  created.value       2008-04-30T09:38:13.731961  1681227
19   /type/edition  publish_country     enk                         1627969
20   /type/edition  publish_places      London                      1613755
21   /type/edition  physical_format     Hardcover                   1495864
22   /type/edition  publish_places      New York                    1467779
23   /type/edition  revision            6                           1437467
24   /type/edition  latest_revision     6                           1437463
25   /type/edition  publish_country     xxk                         1407624
The answers above don't work well in the current (2021) version: they fail if the JSON field is null or if the JSON has null entries, they don't aggregate well (we're trying to get structure, not content), and so on.
So, here's an improved version based on Felipe Hoffa's answer.
It's fully recursive; checks for null and Array types; suppresses array indices (as []); flagged deterministic so it'll get cached; and groups, sorts, & counts the results.
Sample output:
key                    type     n
""                     null     213
avatar                 string   1046
blinking               boolean  1046
created_at             string   1046
deprecated_fields      Array    1046
display_name           string   1046
fields                 Array    1046
fields.[]              Object   31
fields.[].name         string   31
fields.[].value        string   31
fields.[].verified_at  null     27
fields.[].verified_at  string   4
friends_count          number   1046
Note:
- the empty string key means that the field itself is actually null
- the deprecated_fields key is one where all examples in the JSON are ..., deprecated_fields: [], ...
- null is returned as the string "null", like other types (not SQL null)
It could be improved to detect different types of number (int, bigint, float, decimal), dates, numbers stored as strings, or the like. But eh, this was good enough for my purposes, and that'd require more processing.
Just change the your-* bits in the last couple lines:
CREATE TEMP FUNCTION jsonParsed(input STRING)
RETURNS Array<Struct<key STRING, type STRING>>
DETERMINISTIC LANGUAGE js AS
"""
function processKey(node, parent) {
var ary = [];
if (parent !== '') {
parent += '.';
}
if (node == null) {
ary.push({
key: parent,
type: 'null'
})
} else {
Object.keys(node).map(function(key) {
var v = node[key];
if (node.constructor.name == "Array") {
keytouse = '[]'
} else {
keytouse = key
}
if ((v == null) || (typeof(v) !== 'object')) {
if (v == null) { typetouse = 'null';} else {typetouse = typeof(v);}
ary.push({
key: parent + keytouse,
type: typetouse
});
} else {
ary.push({
key: parent + keytouse,
type: v.constructor.name
});
ary = [].concat(ary, processKey(v, parent + keytouse));
}
});
}
return ary;
}
return processKey(JSON.parse(input), '');
""";
WITH keys AS (
  SELECT jsonParsed(your-json-field) AS keys
  FROM `your-project-id.your-database-id.your-table-id`
)
SELECT key, type, COUNT(*) AS n
FROM keys k
CROSS JOIN UNNEST(k.keys) AS kk
GROUP BY key, type
ORDER BY key ASC;
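As a follow-up to the note above about numbers stored as strings: once a particular key is known, a quick SAFE_CAST check shows how many of its values actually parse as numbers. The '$.some_key' path is just a stand-in, and the your-* placeholders are the same ones used above (replace them before running):
-- rough type probe for one known key ('$.some_key' is a stand-in)
SELECT
  COUNT(*) AS total,
  COUNTIF(SAFE_CAST(JSON_EXTRACT_SCALAR(your-json-field, '$.some_key') AS FLOAT64) IS NOT NULL) AS numeric_like
FROM `your-project-id.your-database-id.your-table-id`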
How to extract all of a JSON object's keys using a JavaScript UDF in BigQuery:
SELECT type, key
FROM (
SELECT * FROM
js(
(SELECT json, type FROM [fh-bigquery:openlibrary.ol_dump_20151231]
),
// Input columns.
json, type,
// Output schema.
"[{name: 'key', type:'string'},
{name: 'type', type:'string'}]",
// The function.
"function(r, emit) {
x=JSON.parse(r.json)
Object.keys(x).forEach(function(entry) {
emit({key:entry, type:r.type,});
});
}"
)
)
LIMIT 100
Grouped and counted, that tells you how often each key appears. Once you know the keys, you can use JSON_EXTRACT_SCALAR in a normal SQL query to extract all the information known for a type:
SELECT JSON_EXTRACT_SCALAR(json, '$.key') key,
JSON_EXTRACT_SCALAR(json, '$.type.key') type,
JSON_EXTRACT(json, '$.revision') revision,
JSON_EXTRACT_SCALAR(json, '$.last_modified.value') last_modified,
JSON_EXTRACT_SCALAR(json, '$.title') title,
JSON_EXTRACT_SCALAR(json, '$.publish_date') publish_date,
JSON_EXTRACT(json, '$.publishers') publishers,
JSON_EXTRACT(json, '$.latest_revision') latest_revision,
JSON_EXTRACT(json, '$.languages') languages,
JSON_EXTRACT(json, '$.authors') authors,
JSON_EXTRACT(json, '$.works') works,
JSON_EXTRACT(json, '$.number_of_pages') number_of_pages,
JSON_EXTRACT(json, '$.publish_places') publish_places,
JSON_EXTRACT(json, '$.publish_country') publish_country,
JSON_EXTRACT(json, '$.subjects') subjects,
JSON_EXTRACT_SCALAR(json, '$.created.value') created,
JSON_EXTRACT_SCALAR(json, '$.pagination') pagination,
JSON_EXTRACT_SCALAR(json, '$.by_statement') by_statement,
JSON_EXTRACT(json, '$.isbn_10') isbn_10,
JSON_EXTRACT_SCALAR(json, '$.isbn_10[0]') isbn_10_0,
JSON_EXTRACT(json, '$.notes') notes,
JSON_EXTRACT(json, '$.lc_classifications') lc_classifications,
JSON_EXTRACT_SCALAR(json, '$.subtitle') subtitle,
JSON_EXTRACT(json, '$.lccn') lccn,
JSON_EXTRACT(json, '$.identifiers') identifiers,
JSON_EXTRACT(json, '$.contributions') contributions,
JSON_EXTRACT(json, '$.isbn_13') isbn_13,
JSON_EXTRACT_SCALAR(json, '$.isbn_13[0]') isbn_13_0,
JSON_EXTRACT(json, '$.physical_format') physical_format,
JSON_EXTRACT(json, '$.oclc_numbers') oclc_numbers,
JSON_EXTRACT(json, '$.series') series,
JSON_EXTRACT(json, '$.source_records') source_records,
JSON_EXTRACT(json, '$.covers') covers,
JSON_EXTRACT(json, '$.dewey_decimal_class') dewey_decimal_class,
JSON_EXTRACT_SCALAR(json, '$.edition_name') edition_name,
# ...
FROM [fh-bigquery:openlibrary.ol_dump_20151231]
WHERE type='/type/edition'
LIMIT 10
(Sample data taken from an Open Library data dump, https://openlibrary.org/developers/dumps, based on a Reddit conversation.)
This is what I came up with (specifically for Standard SQL). Not sure if accumulating in a list is the best method. Also, I simplified it for my case, where I'm only concerned with keys.
CREATE TEMPORARY FUNCTION Foo(infoo STRING)
RETURNS Array<String>
LANGUAGE js AS """
blah = [];
function processKey(node, parent) {
if (parent !== '') {parent += '.'};
Object.keys(node).forEach(function(key) {
value = node[key].toString();
if (value !== '[object Object]') {
blah.push(parent+key)
} else {
processKey(node[key], parent + key);
};
});
};
try {
x = JSON.parse(infoo);
processKey(x,'');
return blah;
} catch (e) { return null }
"""
OPTIONS ();
WITH x as(
select Foo(jsonfield) as bbb from clickstream.clikcs
)
select distinct arr_item from (SELECT arr_item FROM x, UNNEST(bbb) as arr_item)
Related
I want to fetch data from MySQL with seqlpro in R, but when I run the query it takes ages.
Here is my code:
old_value<- data.frame()
new_value<- data.frame()
counter<- 0
for (i in 1:length(short_list$id)) {
  mydb = OpenConn(dbname = '**', user = '**', password = '**', host = '**')
  query <- paste0("select * from table where id IN (",short_list$id[i],") and country IN ('",short_list$country[i],"') and date >= '2019-04-31' and `date` <= '2020-09-1';", sep = "" )
  temp_old <- RMySQL::dbFetch(RMySQL::dbSendQuery(mydb, query), n = -1)
  query <- paste0("select * from table2 where id IN (",short_list$id[i],") and country IN ('",short_list$country[i],"') and date >= '2019-04-31' and `date` <= '2020-09-1';", sep = "" )
  temp_new <- RMySQL::dbFetch(RMySQL::dbSendQuery(mydb, query), n = -1)
  RMySQL::dbDisconnect(mydb)
  new_value <- rbind(temp_new, new_value)
  old_value <- rbind(temp_old, old_value)
  counter = counter + 1
  base::print(paste("completed for ", counter), sep = "")
}
Is there any way I can write it more efficiently and call the queries faster? I have around 5000 rows that need to go through the loop. The query works, but it takes time.
I have tried this, but it still gives me an error:
# parallel computing
clust <- makeCluster(length(6))
clusterEvalQ(cl = clust, expr = lapply(c('data.table',"RMySQL","dplyr","plyr"), library, character.only = TRUE))
clusterExport(cl = clust, c('config','short_list'), envir = environment())
new_de <- parLapply(clust, short_list, function(id, country) {
  for (i in 1:length(short_list$id)) {
    mydb = OpenConn(dbname = '*', user = '*', password = '*', host = '**')
    query <- paste0("select * from table1 where id IN (",short_list$id[i],") and country IN ('",short_list$country[i],"') and source_event_date >= date >= '2019-04-31' and `date` <= '2020-09-1';", sep = "" )
    temp_data <- RMySQL::dbFetch(RMySQL::dbSendQuery(mydb, query), n = -1) %>% data.table::data.table()
    RMySQL::dbDisconnect(mydb)
    return(temp_data)
  }
})
stopCluster(clust)
gc(reset = T)
new_de <- data.table::rbindlist(new_de, use.names = TRUE)
I have also defined short_list as a list, as follows:
short_list <- as.list(short_list)
and inside short_list is:
id   country
2    US
3    UK
...  ...
However it gives me this error:
Error in checkForRemoteErrors(val) :
one node produced an error: object 'i' not found
However, when I remove i from id[i] and country[i], it only gives me the first row's result, not the results for all ids and countries.
I think an alternative is to upload the ids you need into a temporary table, and query for everything at once.
tmptable <- "mytemptable"
dbWriteTable(conn, tmptable, short_list, create = TRUE)
alldat <- dbGetQuery(conn, paste("
  select t1.*
  from ", tmptable, " tmp
    left join table1 t1 on tmp.id=t1.id and tmp.country=t1.country
  where t1.`date` >= '2019-04-31' and t1.`date` <= '2020-09-1'"))
dbExecute(conn, paste("drop table", tmptable))
(Many DBMSes use a leading # to indicate a temporary table that is only visible to the local user, is much less likely to clash in the schema namespace, and is automatically cleaned when the connection is closed. I generally encourage use of temp-tables here, check with your DB docs, schema, and/or DBA for more info here.)
The order of tables is important: by pulling all from mytemptable and then left join table1 onto it, we are effectively filtering out any data from table1 that does not include a matching id and country.
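For reference, once paste() assembles the query (with tmptable left at "mytemptable"), the statement MySQL actually receives is roughly the following; table1 and its id/country/date columns come from the question:
-- the temp-table name and columns are the ones used above
select t1.*
from mytemptable tmp
  left join table1 t1 on tmp.id=t1.id and tmp.country=t1.country
where t1.`date` >= '2019-04-31' and t1.`date` <= '2020-09-1'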
This doesn't solve the speed of data download, but some thoughts on that:
Each time you iterate through the queries, you have not-insignificant overhead; if there's a lot of data then this overhead should not be huge, but it's still there. Using a single query will reduce this overhead significantly.
Query time can also be affected by any index(ices) on the tables. Outside the scope of this discussion, but might be relevant if you have a large-ish table. If the table is not indexed efficiently (or the query is not structured well to use those indices), then each query will take a finite amount of time to "compile" and return data. Again, overhead that will be reduced with a single more-efficient query.
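For instance, a composite index covering the filter columns used in these queries might look like the following; the index name is made up here, the columns come from the question's queries, and whether it actually helps depends on the real schema and data:
-- hypothetical composite index matching the id/country/date filters above
CREATE INDEX idx_table1_id_country_date ON table1 (id, country, `date`);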
Large queries might benefit from using the command-line tool mysql; it is about as fast as you're going to get, and might iron over any issues in RMySQL and/or DBI. (I'm not saying they are inefficient, but ... it is unlikely that a free open-source driver will be faster than MySQL's own command-line utility.)
As for doing this in parallel ...
You're using parLapply incorrectly. It accepts a single vector/list and iterates over each object in that list. You might use it iterating over the indices of a frame, but you cannot use it to iterate over multiple columns within that frame. This is exactly like base R's lapply.
Let's show what is going on when you do your call. I'll replace it with lapply (because debugging in multiple processes is difficult).
# parLapply(clust, mtcars, function(id, country) { ... })
lapply(mtcars, function(id, country) { browser(); 1; })
# Called from: FUN(X[[i]], ...)
debug at #1: [1] 1
id
# [1] 21.0 21.0 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 17.8 16.4 17.3 15.2 10.4 10.4 14.7 32.4 30.4 33.9 21.5 15.5 15.2
# [24] 13.3 19.2 27.3 26.0 30.4 15.8 19.7 15.0 21.4
country
# Error: argument "country" is missing, with no default
Because the argument (mtcars here, short_list in yours) is a data.frame and therefore a list-like object, lapply (and parLapply) operate on one column at a time. You were hoping that it would "unzip" the data, applying the first column's values to id and the second column's values to country. In fact, there is a function that does this: Map (and parallel's clusterMap, as I suggested in my comment). More on that later.
The intent of parallelizing things is to not use the for loop inside the parallel function. If short_list has 10 rows, and if your use of parLapply were correct, then you would be querying all rows 10 times, making your problem significantly worse. In pseudo-code, you'd be doing:
parallelize for each row in short_list:
  # this portion is run simultaneously in 10 different processes/threads
  for each row in short_list:
    query for data related to this row
Two alternatives:
Provide a single argument to parLapply representing the rows of the frame.
new_de <- parLapply(clust, seq_len(NROW(short_list)), function(rownum) {
  mydb = OpenConn(dbname = '*', user = '*', password = '*', host = '**')
  on.exit({ DBI::dbDisconnect(mydb) })
  tryCatch(
    DBI::dbGetQuery(mydb, "
      select * from table1
      where id=? and country=?
        and source_event_date >= date >= '2019-04-31' and `date` <= '2020-09-1'",
      params = list(short_list$id[rownum], short_list$country[rownum])),
    error = function(e) e)
})
Use clusterMap for the same effect.
new_de <- clusterMap(clust, function(id, country) {
  mydb = OpenConn(dbname = '*', user = '*', password = '*', host = '**')
  on.exit({ DBI::dbDisconnect(mydb) })
  tryCatch(
    DBI::dbGetQuery(mydb, "
      select * from table1
      where id=? and country=?
        and source_event_date >= date >= '2019-04-31' and `date` <= '2020-09-1'",
      params = list(id, country)),
    error = function(e) e)
}, short_list$id, short_list$country)
If you are not familiar with Map, it is like "zipping" together multiple vectors/lists. For example:
myfun1 <- function(i) paste(i, "alone")
lapply(1:3, myfun1)
### "unrolls" to look like
list(
  myfun1(1),
  myfun1(2),
  myfun1(3)
)
myfun3 <- function(i, j, k) paste(i, j, k, sep = '-')
Map(f = myfun3, 1:3, 11:13, 21:23)
### "unrolls" to look like
list(
  myfun3(1, 11, 21),
  myfun3(2, 12, 22),
  myfun3(3, 13, 23)
)
Some liberties I took in that adapted code:
I shifted from the dbSendQuery/dbFetch double-tap to a single call to dbGetQuery.
I'm using DBI functions, since DBI functions provide a superset of what each driver's package provides. (You're likely using some of it anyway, perhaps without realizing it.) You can switch back with no issue.
I added tryCatch, since sometimes errors can be difficult to deal with in parallel processes. This means you'll need to check the return value from each of your processes to see if either inherits(ret, "error") (problem) or is.data.frame (normal).
I used on.exit so that even if there's a problem, the connection closure should still occur.
I have a list of movies with ratings by user.
{"_id":59607,"title":"King Corn (2007)",
"genres":["Documentary"],
"ratings":[ {"userId":1860,"rating":3},
{"userId":9970,"rating":3.5},
{"userId":16929,"rating":1.5},
{"userId":23473,"rating":4},
{"userId":23733,"rating":4},
{"userId":27584,"rating":3},
{"userId":28232,"rating":4},
{"userId":29482,"rating":3},
{"userId":40976,"rating":5},
{"userId":44631,"rating":4},
{"userId":47613,"rating":3},
{"userId":49763,"rating":3},
{"userId":58160,"rating":4.5},
{"userId":62249,"rating":3},
{"userId":65923,"rating":4},
{"userId":67507,"rating":4},
{"userId":68259,"rating":3.5},
{"userId":70331,"rating":5},
{"userId":71420,"rating":3.5}
]
}
I need to count how many ratings were made by each user. This is my attempt to get at the ratings.
a = load '/movies_1m.json' using JsonLoader('id:int, title : chararray, genres : { ( genre : chararray ) }, ratings: { ( userId : int, rating: float) } ');
then
b = FOREACH a GENERATE FLATTEN(ratings);
describe gives me the following:
b: {ratings::userId: int,ratings::rating: float}
Just to count the users, I need to access the inside of ratings, but this is the point where it's not succeeding. I tried this:
c = FOREACH b GENERATE COUNT(ratings);
It gives me an error.
I need to get something like this:
{userId: int, rating: float}
You need to GROUP in order to COUNT since that is an aggregate operation.
b = FOREACH a GENERATE FLATTEN(ratings);
gr = GROUP b by ratings::userId;
c = FOREACH gr GENERATE group,COUNT($1);
\d c
Output
Note, none of the users in your example repeat, so these are all one.
(1860,1)
(9970,1)
(16929,1)
(23473,1)
(23733,1)
(27584,1)
(28232,1)
(29482,1)
(40976,1)
(44631,1)
(47613,1)
(49763,1)
(58160,1)
(62249,1)
(65923,1)
(67507,1)
(68259,1)
(70331,1)
(71420,1)
Currently I am using GO-GORM for all of my database queries (mostly CRUD) and I am having some issues inserting a generated UUID into a MySQL database column.
The column is BINARY(16), as suggested in multiple blogs; the UUID is generated using the github.com/satori/go.uuid package for Go.
I am using GORM's BeforeCreate hook to generate the UUID if one does not already exist on the user, the code that I am using is as follows:
func (u *User) BeforeCreate(scope *gorm.Scope) (err error) {
    if u.UserID == uuid.Nil {
        uuid, err := uuid.NewV4().MarshalBinary()
        scope.SetColumn("user_id", uuid)
    }
}
I have also used len to check the length of MarshalBinary's output, and it returns 16.
The error I get from GORM when trying to insert the UUID into MySQL is as follows:
(Error 1406: Data too long for column 'user_id' at row 1)
I have also used fmt.Println(uuid) to see the result, which is as follows (it obviously changes as the UUID is generated on every insert):
[93 132 59 55 102 96 72 35 137 185 34 21 195 88 213 127]
My MySQL schema is also as follows:
CREATE TABLE users
(
    id          INT(10) unsigned PRIMARY KEY NOT NULL AUTO_INCREMENT,
    created_at  TIMESTAMP,
    updated_at  TIMESTAMP,
    deleted_at  TIMESTAMP,
    user_id     BINARY(16) NOT NULL,
    username    VARCHAR(255) NOT NULL,
    password    VARCHAR(255),
    firstname   VARCHAR(255),
    lastname    VARCHAR(255),
    email       VARCHAR(255),
    address_id  VARCHAR(255)
);
CREATE INDEX idx_users_deleted_at ON users (deleted_at);
CREATE UNIQUE INDEX username ON users (username);
CREATE UNIQUE INDEX user_id ON users (user_id);
I have tried different methods and libraries to generate UUIDs and convert them to binary to insert with similar results.
I think the problem is in the definition of the User model. To save the UUID as 16-byte binary, you need to define the UserID field as []byte, not uuid.UUID.
type User struct {
    // other fields ...
    UserID []byte
    // other fields ...
}

func (u *User) BeforeCreate(scope *gorm.Scope) (err error) {
    if u.UserID == nil {
        // MarshalBinary returns the 16-byte form plus an error that should be checked
        uuid, err := uuid.NewV4().MarshalBinary()
        if err != nil {
            return err
        }
        scope.SetColumn("user_id", uuid)
    }
    return nil
}
If you define the field as uuid.UUID, gorm "misinterprets" the field as a string and then inserts that string into the database as binary. For example, the following UUID,
uuid: 16ac369b-e57f-471b-96f6-1068ead0bf98
//16-bytes equivalent
bytes: [22 172 54 155 229 127 71 27 150 246 16 104 234 208 191 152]
will be inserted into the database as the ASCII codes of the UUID string, which are
0x31 0x36 0x61 0x63 0x33 0x36 0x39 0x62 0x2D 0x65 ...
('1' '6' 'a' 'c' '3' '6' '9' 'b' '-' 'e' ...)
which are 36 bytes in length, thus you're getting Error 1406: ...
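If it helps while debugging, here is one way to sanity-check what actually landed in the column from the MySQL side, using the users table from the question; a correctly marshalled UUID shows a length of 16, while the text form needs 36 bytes, which is exactly what BINARY(16) rejects:
-- Inspect stored UUIDs: LENGTH should be 16 for a binary UUID
SELECT username, LENGTH(user_id) AS stored_bytes, HEX(user_id) AS stored_hex
FROM users
LIMIT 10;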
I have a tuple of (String, Bool) that needs to be encoded to a JSON array in Elm.
The link below is useful for the primitive types and for list, array, and object, but I need to encode a tuple2.
Refer : http://package.elm-lang.org/packages/elm-lang/core/4.0.3/Json-Encode#Value
I tried a different approach, encoding the tuple with the toString function.
It does not give me a JSON array; instead it produces a string like "(\"r\",False)".
Json.Decode expects the input parameter to decode as in the snippet below:
decodeString (tuple2 (,) float float) "[3,4]"
Refer : http://package.elm-lang.org/packages/elm-lang/core/4.0.3/Json-Decode
Q: When there is a decode function available for tuple2, why is the encode function missing?
You can build a generalized tuple size 2 encoder like this:
import Json.Encode exposing (..)
tuple2Encoder : (a -> Value) -> (b -> Value) -> (a, b) -> Value
tuple2Encoder enc1 enc2 (val1, val2) =
list [ enc1 val1, enc2 val2 ]
Then you can call it like this, passing the types of encoders you want to use for each slot:
tuple2Encoder string bool ("r", False)
In Elm 0.19 (https://package.elm-lang.org/packages/elm/json/latest/Json-Encode), a generalized tuple2 encoder would be:
import Json.Encode exposing (list, Value)
tuple2Encoder : ( a -> Value ) -> ( b -> Value ) -> ( a, b ) -> Value
tuple2Encoder enc1 enc2 ( val1, val2 ) =
list identity [ enc1 val1, enc2 val2 ]
Usage:
encode 0 <| tuple2Encoder string int ("1",2)
I have an object that looks like this one:
object =
  title: 'an object'
  properties:
    attribute1:
      random_number: 2
      attribute_values:
        a: 10
        b: 'irrelevant'
    attribute2:
      random_number: 4
      attribute_values:
        a: 15
        b: 'irrelevant'
  some_random_stuff: 'random stuff'
I want to extract the sum of the 'a' values on attribute1 and attribute2.
What would be the best way to do this in Coffeescript?
(I have already found one way to do it but that just looks like Java-translated-to-coffee and I was hoping for a more elegant solution.)
Here is what I came up with (edited to be more generic based on comment):
sum_attributes = (x) =>
  sum = 0
  for name, value of object.properties
    sum += value.attribute_values[x]
  sum
alert sum_attributes('a') # 25
alert sum_attributes('b') # 0irrelevantirrelevant
So, that does what you want... but it probably doesn't do exactly what you want with strings.
You might want to pass in the accumulator seed, like sum_attributes 0, 'a' and sum_attributes '', 'b'
Brian's answer is good. But if you wanted to bring in a functional programming library like Underscore.js, you could write a more succinct version:
sum = (arr) -> _.reduce arr, ((memo, num) -> memo + num), 0
sum _.pluck(object.properties, 'a')
total = (attr.attribute_values.a for key, attr of obj.properties).reduce (a,b) -> a+b
or
sum = (arr) -> arr.reduce((a, b) -> a+b)
total = sum (attr.attribute_values.a for k, attr of obj.properties)