I am trying to use Zapier to add an email to a SendFox mailing list when a new user gets added to a specified Firestore path. It's asking me for a structured query to find this data.
I am using the one it suggests, but new users aren't being added correctly. My concern is that the structured query isn't set up correctly.
My data is structured as follows:
- Customers (Collection)
  - [User ID] (Document)
    - EMAIL
    - Other Info
    - Other Info...
Whenever a new UserID document is added, I'm trying to access the email field in Zapier.
This is the current structured query:
"orderBy": [{
"field": {
"fieldPath": "email"
},
"direction": "DESCENDING"
}]
What is the correct structured query (json) to access the document I'm looking for?
To clarify, it's the latest document in the "customers" collection with the field "email".
I'm not looking for any JavaScript code to get this data, merely the correct JSON structure for this query.
The query you have right now returns all documents from the customers collection in descending order of their email address, so the ones starting with "z" come first. That is unlikely to be what you want.
Firestore has no built-in concept of the most recent document, so what you'll want to do is the following (a sketch of the resulting query is below):
- Add a timestamp field to each document that you set to the current time (preferably a server-side timestamp, but a client-side one will probably work too).
- Sort the query on descending values of that new timestamp field.
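For example, a minimal structured query along those lines might look like this. It is only a sketch: it assumes the new timestamp field is called createdAt, so substitute whatever field name you actually add to your documents.

{
  "from": [{ "collectionId": "customers" }],
  "orderBy": [{
    "field": { "fieldPath": "createdAt" },
    "direction": "DESCENDING"
  }],
  "limit": 1
}

The limit of 1 keeps only the newest document, and the email field can then be read from that single result.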
I need to append data to a new column of a spreadsheet every day.
I want to do this automatically, just like spreadsheets.values.append does, but for columns.
spreadsheets.values.append only appends data as new rows, not columns!
I have tried these params:
majorDimension doesn't work for me:
Invalid JSON payload received. Unknown name "majorDimension": Cannot bind query parameter. Field 'majorDimension' could not be found in request message.
InsertDataOption doesn't seem to make any difference
I'm sending data to a named range called "foo". When foo is already filled, the API places the data at the bottom. I need the data to be placed to the right.
You could push each element of the new column onto each row of the 2D array, with something like this: https://stackoverflow.com/a/68886835/7215091. In that case I used splice, but you could probably use push instead.
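A rough sketch of that idea in Apps Script, assuming the named range is "foo" and the new day's values come in as a plain array (the function name and parameter are illustrative):

// Read the current values of the named range "foo", push one new cell onto
// the end of every row, then write the widened block back to the sheet.
function appendColumnToNamedRange(newColumn) {
  var range = SpreadsheetApp.getActive().getRangeByName("foo");
  var values = range.getValues();
  for (var i = 0; i < values.length; i++) {
    values[i].push(newColumn[i] !== undefined ? newColumn[i] : "");
  }
  // Write back starting at the range's top-left cell, one column wider than before.
  range.getSheet()
    .getRange(range.getRow(), range.getColumn(), values.length, values[0].length)
    .setValues(values);
}

If you are calling the REST API instead, the same approach applies: read the range with spreadsheets.values.get, push the new value onto each row, and write the result back with spreadsheets.values.update.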
I have a dataset with a column containing arrays of JSON data that looks like:
[{"name":"aaa","type":"yyy"},{"name":"bbb","type":"ccc"}]
or, more specifically:
[screenshot: dataset with JSON array column]
Is there any straightforward method of extracting the JSON data from the column using something like JSON_QUERY, so that I can use it in a report?
As far as I can tell, the existing JSON array format is not usable with any of the T-SQL JSON functions.
The array in the column "jsonCol" needs to be in the form of:
{ "tag": [{"name":"aaa","type":"yyy"},{"name":"bbb","type":"ccc"}]}
and then I can extract each array element individually with:
SELECT JSON_QUERY(jsonCol, '$.tag[0]') as tag
FROM
So I could add a prefix and suffix string to the SELECT statement to fix this, as long as no one else will see it.
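For example, a sketch of that workaround (dbo.MyTable is just a placeholder for the real table name):

-- Wrap the raw array in an object so the path '$.tag[0]' has something to address
SELECT JSON_QUERY('{"tag":' + jsonCol + '}', '$.tag[0]') AS tag
FROM dbo.MyTable;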
As part of a tool I am creating for my team, I am connecting to an internal web service via Power Query.
The web service returns nested JSON, and I have trouble parsing the JSON data into the format I am looking for. Specifically, I have a problem extracting the content of records in a column into a comma-separated list.
The data
As you can see, the data contains details related to a specific "race" (race_id). What I want to focus on is the information in the driver_codes column, which is a List of Records. The number of records varies from 0 to 4, and each record is structured as id: 50000 (where 50000 could be any 5-digit number). So it could be:
id: 10000
id: 20000
id: 30000
As requested, an example snippet of the raw JSON:
<race>
  <race_id>ABC123445</race_id>
  <begin_time>2018-03-23T00:00:00Z</begin_time>
  <vehicle_id>gokart_11</vehicle_id>
  <driver_code>
    <id>90200</id>
  </driver_code>
  <driver_code>
    <id>90500</id>
  </driver_code>
</race>
I want it to be structured as:
10000,20000,30000
The problem
When I choose "Extract values" on the column with the list, then I get the following message:
Expression.Error: We cannot convert a value of type Record to type Text.
If I instead choose "Expand to new rows", then duplicate rows are created for each unique driver code. I now have several rows per unique race_id, but what I wanted was one row per unique race_id and a concatenated list of driver codes.
What I have tried
I have tried grouping the data by the race_id, but the operations allowed when grouping data do not include concatenating rows.
I have also tried unpivoting the column, but that leaves me with the same problem: I still get multiple rows.
I have googled (and Stack Overflowed) this issue extensively without luck. It might be that I am using the wrong keywords, however, so I apologize if a duplicate exists.
UPDATE: What I have tried based on the answers so far
I tried Alexis Olson's excellent and very detailed method, but I end up with the following error:
Expression.Error: We cannot convert the value "id" to type Number. Details:
Value=id
Type=Type
The error comes from using either of these lines of M code (one with a List.Transform and one without):
= Table.Group(#"Renamed Columns", {"race_id", "begin_time", "vehicle_id"},
{{"DriverCodes", each Text.Combine([driver_code][id], ","), type text}})
= Table.Group(#"Renamed Columns", {"race_id", "begin_time", "vehicle_id"},
{{"DriverCodes", each Text.Combine(List.Transform([driver_code][id], each Number.ToText(_)), ","), type text}})
NB: if I do not write [driver_code][id] but only [id] then I get another error saying that column [id] does not exist.
Here's the JSON equivalent to the XML example you gave:
{"race": {
"race_id": "ABC123445",
"begin_time": "2018-03-23T00:00:00Z",
"vehicle_id": "gokart_11",
"driver_code": [
{ "id": "90200" },
{ "id": "90500" }
]}}
If you load this into the query editor, convert it to a table, and expand out the Value record, you'll have a table that looks like this:
At this point, choose Expand to New Rows, and then expand the id column so that your table looks like this:
At this point, you can apply the trick #mccard suggested. Group by the first columns and aggregate over the last using, say, max.
This last step produces M code like this:
= Table.Group(#"Expanded driver_code1",
{"Name", "race_id", "begin_time", "vehicle_id"},
{{"id", each List.Max([id]), type text}})
Instead of this, you want to replace List.Max with Text.Combine as follows:
= Table.Group(#"Changed Type",
{"Name", "race_id", "begin_time", "vehicle_id"},
{{"id", each Text.Combine([id], ","), type text}})
Note that if your id column is not in text format, this will throw an error. To fix this, insert a step before you group rows, using Transform tab > Data Type: Text, to convert the type. Another option is to use List.Transform inside your Text.Combine like this:
Text.Combine(List.Transform([id], each Number.ToText(_)), ",")
Either way, you should end up with this:
An approach would be to use the Advanced Editor and change the operation done when grouping the data directly there in the code.
First, create the grouping using one of the operations available in the menu. For instance, create a column "Sum" using the Sum operation. It will give an error, but we should get the starting code to work on.
Then, open the Advanced Editor and find the code corresponding to the operation. It should be something like:
{{"Sum", each List.Sum([driver_codes]), type text}}
Change it to:
{{"driver_codes", each Text.Combine([driver_codes], ","), type text}}
In a Data Flow, I have an ADO NET Source which loads a table like this:
PersonID, Email
1, "john#hotmail.com"
1, "john_job#yahoo.com"
2, "susan#gmail.com"
2, "sus2010#hotmail.com"
I need to merge emails from each persons and get a result like this:
PersonID, EmailsArray
1, "john#hotmail.com,john_job#yahoo.com"
2, "susan#gmail.com,sus2010#hotmail.com"
How do I do it? Using a derived column? A script component? A foreach loop (which doesn't exist in a Data Flow)? Thanks in advance.
Use an asynchronous script component with something like the following logic (a rough sketch follows):
- Sort your data on the ID column.
- In the script component, declare a variable that keeps track of the previous ID; assign the current row's ID to it at the end of your row-processing code.
- For each row in the input buffer, concatenate the email field onto a string variable.
- Check whether the previous ID is equal to the current ID (coming from your input buffer). If it is different, add a row to the output buffer with the previous ID and the concatenated string, then reset the string to empty.
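A rough C# sketch of that logic, assuming an asynchronous script component whose output Output0 has the columns PersonID and EmailsArray and whose input is already sorted by PersonID (the column names are taken from the question; the buffer and class names are the defaults the designer generates):

public class ScriptMain : UserComponent
{
    // Tracks the previous row's PersonID and the emails collected for it so far.
    private int previousId = -1;
    private string emails = "";

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // A new ID means the previous group is complete, so emit it.
        if (previousId != -1 && Row.PersonID != previousId)
        {
            Output0Buffer.AddRow();
            Output0Buffer.PersonID = previousId;
            Output0Buffer.EmailsArray = emails;
            emails = "";
        }

        emails = emails.Length == 0 ? Row.Email : emails + "," + Row.Email;
        previousId = Row.PersonID;
    }

    public override void Input0_ProcessInput(Input0Buffer Buffer)
    {
        base.Input0_ProcessInput(Buffer);

        // Once the whole input has been processed, flush the last group.
        if (Buffer.EndOfRowset() && previousId != -1)
        {
            Output0Buffer.AddRow();
            Output0Buffer.PersonID = previousId;
            Output0Buffer.EmailsArray = emails;
        }
    }
}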