Export ::Array{Expr,1} to .csv

Does anyone know a good way to export the type
::Array{Expr,1}
to a .csv?
Here is an example of the Expression that I have in exp_out:
Expr[3]
[Decycler(es_close; n=224), 263, 262, 0.04, 62, 147, 104, 201, -3.5, 6.0, 4, 6, "Next_bar_open", "This_bar_on_close", 4.5, 1, 10, "short", 11, 14.0]
[HurstCoefficient(es_close; n=152, LPPeriod=131), 291, 130, 0.68, 6, 261, 243, 183, -2.0, 4.0, 6, 5, "Next_bar_open", "Next_bar_open", 3.0, 1, 1, "long", 23, 12.5]
[broadcast(/, ModifiedStochastic(es_close; n=293, HPPeriod=95, LPPeriod=139), RoofingFilterIndicator(es_close; LPPeriod=224, HPPeriod=57)), 222, 134, 0.42, 267, 156, 43, 112, -4.5, 1.0, 3, 2, "Next_bar_open", "This_bar_on_close", 17.5, 8, 1, "long", 26, 4.5]
My goal is to export the expressions to a .csv file so that each full expression above is contained in a single column on export.
using CSV
CSV.write("C:/Users/yoohoo/expr.csv", exp_out; delim=',')
It expands into multiple columns on output because it splits on delim=','. Any ideas how to export this cleanly to a single column?
Output:
You can see that delim=',' splits the expression so that my Expr encroaches on the other columns.
The expected output would not split on delim=',' and would keep the full Expr in the first ID column.
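As an aside on the mechanics: a CSV writer keeps commas inside one cell only if the field is quoted. In Julia, one hedged fix is to stringify each expression first (e.g. something like wrapping string.(exp_out) in a single-column table before CSV.write, since CSV.jl quotes string fields that contain the delimiter). The quoting principle itself, sketched in Python with made-up stand-in rows:

```python
import csv

# Hypothetical stand-ins for the stringified Expr entries from the question.
rows = [
    "Decycler(es_close; n=224), 263, 262, 0.04",
    "HurstCoefficient(es_close; n=152, LPPeriod=131), 291, 130",
]

with open("expr.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    for r in rows:
        # One-element row: the whole expression, commas included, lands in column 1.
        writer.writerow([r])

with open("expr.csv", newline="") as f:
    back = list(csv.reader(f))
```

Reading the file back confirms each expression occupies exactly one cell.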


Azure Stream Analytics - JSON

I am trying to pass the JSON below through Azure Stream Analytics to an Azure SQL server. The data is coming from the Azure IoT Hub and is coming through happily.
{
"nodes": {
"SN0013A20041E23697": {
"firmware_version": 5,
"transmission_count": 42,
"reserve_byte": 0,
"battery_level": 3.29406,
"type": 32,
"node_id": 0,
"rssi": 9,
"mass_concentration_pm_1_0": 0.88,
"mass_concentration_pm_2_5": 1.04,
"mass_concentration_pm_4_0": 1.13,
"mass_concentration_pm_10_0": 1.17,
"number_concentration_pm_0_5": 5.73,
"number_concentration_pm_1_0": 6.92,
"number_concentration_pm_2_5": 7.07,
"number_concentration_pm_4_0": 7.09,
"number_concentration_pm_10_0": 7.09,
"typical_particle_size": 0.48,
"humidity": 45.35,
"temperature": 20.84
},
"SN0013A20041E2367B": {
"firmware_version": 5,
"transmission_count": 43,
"reserve_byte": 0,
"battery_level": 2.99782,
"type": 32,
"node_id": 0,
"rssi": 16,
"mass_concentration_pm_1_0": 1.35,
"mass_concentration_pm_2_5": 1.43,
"mass_concentration_pm_4_0": 1.43,
"mass_concentration_pm_10_0": 1.43,
"number_concentration_pm_0_5": 9.13,
"number_concentration_pm_1_0": 10.77,
"number_concentration_pm_2_5": 10.83,
"number_concentration_pm_4_0": 10.83,
"number_concentration_pm_10_0": 10.83,
"typical_particle_size": 0.41,
"humidity": 45.72,
"temperature": 20.2
}
}
}
I can use a query like this and it will pass through one of the devices but not the other.
SELECT
"nodes"."SN0013A20041E23697"."temperature" as Temperature
, "nodes"."SN0013A20041E23697"."humidity" as Humidity
From input
Is there a way to pass through both devices in the same query?
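For comparison, the transformation being asked for — one output row per device, without hard-coding each serial number — looks like this in Python. (In Stream Analytics itself, the analogous hedged approach is iterating record properties, e.g. with CROSS APPLY and GetRecordProperties; the device IDs below are the ones from the question, values abridged.)

```python
import json

# Payload shaped like the question's "nodes" record (values abridged).
payload = json.loads("""
{"nodes": {
  "SN0013A20041E23697": {"temperature": 20.84, "humidity": 45.35},
  "SN0013A20041E2367B": {"temperature": 20.2,  "humidity": 45.72}
}}
""")

# One row per device key, so adding a new sensor needs no query changes.
rows = [
    {"DeviceId": sn, "Temperature": node["temperature"], "Humidity": node["humidity"]}
    for sn, node in payload["nodes"].items()
]
```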

Can't set ticks in Octave

It has been over a decade since I last used Octave. I thought I still remembered some things. I confidently typed:
x = [1961,1962,1963, 1964, 1965,...
1966, 1967, 1968, 1969, 1970,...
1971, 1972, 1973, 1974, 1975,...
1976, 1977, 1978, 1979, 1980,...
1981, 1982, 1983, 1984, 1985,...
1986, 1987, 1988, 1989, 1990,...
1991, 1992, 1993, 1994, 1995,...
1996, 1997, 1998, ...
1999, 2000, 2001, 2002, 2003, ...
2004, 2005, 2006, 2007, 2008, ...
2009, 2010, 2011, 2012, 2013, ...
2014, 2015, 2016, 2017, 2018];
y = [1, 0, 0, 0, 0,...
0, 0, 0, 0, 0,...
0, 0, 0, 0, 0,...
0, 0, 0, 0, 0,...
0, 1, 0, 2, 0,...
1, 0, 1, 0, 1,...
2, 1, 0, 0, 0,...
0, 2, 2, ...
14, 21 ,23, 7, 7,...
1, 3, 5, 4, 3, ...
5, 5, 3, 7, 4, ...
7, 3, 3, 4, 4];
plot(x,y)
set(gca, 'xtick', 1960:5:2020);
But I just don't get the tick spacing I desire: one tick every 5 units. This is what comes out:
It must be some beginner's mistake. So I checked here and on numerous other pages, but I just don't see it!
I am using the app Anoc for iPad, though I greatly doubt it is the app’s fault. ;)
(Disclaimer: I'm the developer of the Anoc app)
Anoc itself is only an editor (as the name Anoc Octave Editor says). Please keep in mind that there is no plot generation on the device.
As for your question: please write a draw_plot command after the set() function and wrap the plot calls inside hold on and hold off. This will fix the tick issue.
x = [1961,1962,1963, 1964, 1965,...
1966, 1967, 1968, 1969, 1970,...
1971, 1972, 1973, 1974, 1975,...
1976, 1977, 1978, 1979, 1980,...
1981, 1982, 1983, 1984, 1985,...
1986, 1987, 1988, 1989, 1990,...
1991, 1992, 1993, 1994, 1995,...
1996, 1997, 1998, ...
1999, 2000, 2001, 2002, 2003, ...
2004, 2005, 2006, 2007, 2008, ...
2009, 2010, 2011, 2012, 2013, ...
2014, 2015, 2016, 2017, 2018];
y = [1, 0, 0, 0, 0,...
0, 0, 0, 0, 0,...
0, 0, 0, 0, 0,...
0, 0, 0, 0, 0,...
0, 1, 0, 2, 0,...
1, 0, 1, 0, 1,...
2, 1, 0, 0, 0,...
0, 2, 2, ...
14, 21 ,23, 7, 7,...
1, 3, 5, 4, 3, ...
5, 5, 3, 7, 4, ...
7, 3, 3, 4, 4];
hold on
plot(x,y)
set(gca, 'xtick', 1960:5:2020);
draw_plot
hold off

Cannot select fields beginning with numerics in jq (with references to other examples)

I have seen proposed solutions that worked for others in the following pages but I can't get it to work for me using the jqplay browser shell:
https://github.com/stedolan/jq/issues/344
https://github.com/stedolan/jq/issues/345
https://github.com/stedolan/jq/issues/1304
Given this data:
{
"api_version": 4,
"error": null,
"result": [
{
"labelId": "ALL",
"labelName": "ALL",
"samples": 30104,
"avgResponseTime": 6.849,
"90line": 8,
"95line": 9,
"99line": 36,
"minResponseTime": 2,
"maxResponseTime": 1951,
"avgLatency": 5.287,
"geoMeanResponseTime": 5.484,
"stDev": 23.765,
"duration": 302,
"avgBytes": 110.224,
"avgThroughput": 99.682,
"medianResponseTime": 5,
"errorsCount": 0,
"errorsRate": 0,
"hasLabelPassedThresholds": null
},
{
"labelId": "3687c89fac2385d28d53b356d4785418",
"labelName": "100b 3600s Cache",
"samples": 7300,
"avgResponseTime": 6.028,
"90line": 7,
"95line": 8,
"99line": 11,
"minResponseTime": 2,
"maxResponseTime": 680,
"avgLatency": 6.021,
"geoMeanResponseTime": 5.203,
"stDev": 16.233,
"duration": 300,
"avgBytes": 16.581,
"avgThroughput": 24.333,
"medianResponseTime": 5,
"errorsCount": 0,
"errorsRate": 0,
"hasLabelPassedThresholds": null
},
{
"labelId": "f88f8ff81bf9b521134637639a0277be",
"labelName": "100b NonCache",
"samples": 729,
"avgResponseTime": 6.143,
"90line": 7,
"95line": 7,
"99line": 9,
"minResponseTime": 3,
"maxResponseTime": 877,
"avgLatency": 6.136,
"geoMeanResponseTime": 4.627,
"stDev": 32.817,
"duration": 295,
"avgBytes": 1.64,
"avgThroughput": 2.471,
"medianResponseTime": 4,
"errorsCount": 0,
"errorsRate": 0,
"hasLabelPassedThresholds": null
}
]
}
I originally attempted:
[.result[] | {labelName: .labelName, samples: .samples, avgResponseTime: .avgResponseTime, 90line: .90line, 95line: .95line, 99line: .99line, minResponseTime: .minResponseTime, maxResponseTime: .maxResponseTime, avgLatency: .avgLatency, geoMeanResponseTime: .geoMeanResponseTime, stDev: .stDev, durationSeconds: .durationSeconds, avgBytes: .avgBytes, avgThroughput: .avgThroughput, medianResponseTime: .medianResponseTime, errorCount: .errorsCount, errorRate: .errorsRate, hasLabelPassedThresholds: .hasLabelPassedThresholds}]
and got
jq: error: syntax error, unexpected LITERAL (Unix shell quoting issues?) at <top-level>, line 1:
[.result[] | {labelName: .labelName, samples: .samples, avgResponseTime: .avgResponseTime, 90line: .90line, 95line: .95line, 99line: .99line, minResponseTime: .minResponseTime, maxResponseTime: .maxResponseTime, avgLatency: .avgLatency, geoMeanResponseTime: .geoMeanResponseTime, stDev: .stDev, durationSeconds: .durationSeconds, avgBytes: .avgBytes, avgThroughput: .avgThroughput, medianResponseTime: .medianResponseTime, errorCount: .errorsCount, errorRate: .errorsRate, hasLabelPassedThresholds: .hasLabelPassedThresholds}]
jq: 1 compile error
exit status 3
Looking at the similar questions asked in the prior links, I attempted to fix with queries like this:
[.result[] | {labelName: .labelName, samples: .samples, avgResponseTime: .avgResponseTime, 90line: .[“90line”], 95line: .[“95line”], 99line: .[“99line”], minResponseTime: .minResponseTime, maxResponseTime: .maxResponseTime, avgLatency: .avgLatency, geoMeanResponseTime: .geoMeanResponseTime, stDev: .stDev, durationSeconds: .durationSeconds, avgBytes: .avgBytes, avgThroughput: .avgThroughput, medianResponseTime: .medianResponseTime, errorCount: .errorsCount, errorRate: .errorsRate, hasLabelPassedThresholds: .hasLabelPassedThresholds}]
to deal with the fields beginning with numeric characters. I still get the same syntax errors, however, so I'm unable to line up my expectations with what I'm seeing in those other solutions. I just can't figure out what I'm doing wrong; I've tried all kinds of other quoting, and stuff like | tostring, to no avail.
EDIT: in response to a proposed answer below:
Hmm, I just can't get it. To recap:
{
"api_version": 4,
"error": null,
"result": [
{
"labelId": "ALL",
"labelName": "ALL",
"samples": 30104,
"avgResponseTime": 6.849,
"90line": 8,
"95line": 9,
"99line": 36,
"minResponseTime": 2,
"maxResponseTime": 1951,
"avgLatency": 5.287,
"geoMeanResponseTime": 5.484,
"stDev": 23.765,
"duration": 302,
"avgBytes": 110.224,
"avgThroughput": 99.682,
"medianResponseTime": 5,
"errorsCount": 0,
"errorsRate": 0,
"hasLabelPassedThresholds": null
}
]
}
jq bit:
jq '[.result[] | {labelName: .labelName, samples: .samples, avgResponseTime: .avgResponseTime, minResponseTime: .minResponseTime, maxResponseTime: .maxResponseTime, avgLatency: .avgLatency, geoMeanResponseTime: .geoMeanResponseTime, stDev: .stDev, durationSeconds: .durationSeconds, avgBytes: .avgBytes, avgThroughput: .avgThroughput, medianResponseTime: .medianResponseTime, errorCount: .errorsCount, errorRate: .errorsRate, hasLabelPassedThresholds: .hasLabelPassedThresholds, “90line”: .[“90line”], “95line”: .[“95line”], “99line”: .[“99line”]}]'
Returns:
jq: error: syntax error, unexpected INVALID_CHARACTER (Unix shell quoting issues?) at , line 1:
You need to quote the names of keys that begin with a numeral, using straight ASCII double quotes — the typographic “smart” quotes in your attempts are what trigger the INVALID_CHARACTER error — e.g.
"90line": .["90line"]
Note also that the jq expression {"90line": .["90line"]} can be abbreviated to just {"90line"}.
Example
With your input:
$ jq '[.result[] | {labelName, "90line": .["90line"] } ]' input.json
[
{
"labelName": "ALL",
"90line": 8
},
{
"labelName": "100b 3600s Cache",
"90line": 7
},
{
"labelName": "100b NonCache",
"90line": 7
}
]
If your JSON looks like this (in a file named your.json):
{
"banana": {
"9zz": true
}
}
And you want to grab 9zz with jq, do
jq '.banana["9zz"]' your.json
In other words, the two following lines are identical, but the bottom one works with values beginning with a number:
jq '.one.two' your.json
jq '.one["two"]' your.json
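For comparison, the same extraction sketched in Python: keys beginning with a digit need no special handling there, because dictionary access is always by string — which is exactly what jq's bracket form .["90line"] restores.

```python
import json

# Abridged version of the question's input.
doc = json.loads("""
{"result": [
  {"labelName": "ALL", "90line": 8},
  {"labelName": "100b 3600s Cache", "90line": 7}
]}
""")

# Equivalent of: jq '[.result[] | {labelName, "90line": .["90line"]}]'
out = [{"labelName": r["labelName"], "90line": r["90line"]} for r in doc["result"]]
```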

JSON Formatting error

I am getting this error while trying to import this JSON into a Google BigQuery table
file-00000000: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. (error code: invalid)
JSON parsing error in row starting at position 0 at file: file-00000000. Start of array encountered without start of object. (error code: invalid)
This is the JSON
[{'instrument_token': 11192834, 'average_price': 8463.45, 'last_price': 8471.1, 'last_quantity': 75, 'buy_quantity': 1065150, 'volume': 5545950, 'depth': {'buy': [{'price': 8471.1, 'quantity': 300, 'orders': 131072}, {'price': 8471.0, 'quantity': 300, 'orders': 65536}, {'price': 8470.95, 'quantity': 150, 'orders': 65536}, {'price': 8470.85, 'quantity': 75, 'orders': 65536}, {'price': 8470.7, 'quantity': 225, 'orders': 65536}], 'sell': [{'price': 8471.5, 'quantity': 150, 'orders': 131072}, {'price': 8471.55, 'quantity': 375, 'orders': 327680}, {'price': 8471.8, 'quantity': 1050, 'orders': 65536}, {'price': 8472.0, 'quantity': 1050, 'orders': 327680}, {'price': 8472.1, 'quantity': 150, 'orders': 65536}]}, 'ohlc': {'high': 8484.1, 'close': 8336.45, 'low': 8422.35, 'open': 8432.75}, 'mode': 'quote', 'sell_quantity': 998475, 'tradeable': True, 'change': 1.6151959167271395}]
http://jsonformatter.org/ also gives a parse error for this JSON block. I need help understanding where the formatting is wrong; this is the JSON from a REST API.
This is not valid JSON. JSON uses double quotes, not single quotes. Also, True should be true.
If I had to guess, I would guess that this is Python code being passed off as JSON. :-)
I suspect that even once this is made into correct JSON, it's not the format Google BigQuery is expecting. From https://cloud.google.com/bigquery/data-formats#json_format, it looks like you should have a text file with one JSON object per line. Try just this:
{"mode": "quote", "tradeable": true, "last_quantity": 75, "buy_quantity": 1065150, "depth": {"buy": [{"quantity": 300, "orders": 131072, "price": 8471.1}, {"quantity": 300, "orders": 65536, "price": 8471.0}, {"quantity": 150, "orders": 65536, "price": 8470.95}, {"quantity": 75, "orders": 65536, "price": 8470.85}, {"quantity": 225, "orders": 65536, "price": 8470.7}], "sell": [{"quantity": 150, "orders": 131072, "price": 8471.5}, {"quantity": 375, "orders": 327680, "price": 8471.55}, {"quantity": 1050, "orders": 65536, "price": 8471.8}, {"quantity": 1050, "orders": 327680, "price": 8472.0}, {"quantity": 150, "orders": 65536, "price": 8472.1}]}, "change": 1.6151959167271395, "average_price": 8463.45, "ohlc": {"close": 8336.45, "high": 8484.1, "open": 8432.75, "low": 8422.35}, "instrument_token": 11192834, "last_price": 8471.1, "sell_quantity": 998475, "volume": 5545950}
Even once the OP has a valid JSON record, it wouldn't work with BigQuery, and here's why:
BigQuery expects newline-delimited JSON: one JSON object {} per line.
This means you cannot supply a JSON array [] of records and expect BigQuery to detect it. You must always have one JSON object per line.
For more information on the different forms of JSON structure, see json.org.
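Since the original payload is a Python literal rather than JSON (single quotes, True), one hedged way to produce the newline-delimited JSON BigQuery expects is to parse it with ast.literal_eval and re-serialize each element on its own line:

```python
import ast
import json

# Abridged version of the question's payload: a Python-style literal, not JSON.
raw = "[{'mode': 'quote', 'tradeable': True, 'last_price': 8471.1}]"

records = ast.literal_eval(raw)  # safely parses the Python literal syntax
# One JSON object per line, as BigQuery's newline-delimited format requires.
ndjson = "\n".join(json.dumps(r) for r in records)
```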

Looping through all json elements using Unity Boomlagoon Json

I'm using Boomlagoon JSON in my Unity project. My JSON file has several lines in it, and so far I can only get Boomlagoon to read the first one. Is there a way I can make a loop that will parse the entire JSON file?
Here is my json:
{"type": 1, "squads": [{"player_id": 1, "squad": [1, 2, 3, 4]}, {"player_id": 2, "squad": [6, 7, 8, 9]}], "room_number": 1, "alliance_id": 1, "level": 1}
{"type": 2, "squads": [{"player_id": 2, "squad": [1, 2, 3, 4]}, {"player_id": 3, "squad": [6, 7, 8, 9]}], "room_number": 2, "alliance_id": 1, "level": 1}
{"type": 3, "squads": [{"player_id": 3, "squad": [1, 2, 3, 4]}, {"player_id": 4, "squad": [6, 7, 8, 9]}], "room_number": 3, "alliance_id": 1, "level": 1}
And when I do a loop like this:
foreach (KeyValuePair<string, JSONValue> pair in emptyObject) { ... }
it only gives me results for the first entry (in this example type:1). Thanks.
Your file actually contains 3 JSON objects, and what happens when you parse it is that the parsing stops once the first object ends. You need to parse each line separately to get all of the data.
As an aside, you'll notice that if you paste your JSON into the validator at jsonlint.com it'll give you a parsing error where the second object begins.
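The line-by-line parsing described above can be sketched as follows (in Python for illustration; in C# with Boomlagoon the idea is the same: split the file contents on newlines and parse each line as its own object):

```python
import json

# Abridged version of the question's file: three JSON objects, one per line.
text = """\
{"type": 1, "room_number": 1}
{"type": 2, "room_number": 2}
{"type": 3, "room_number": 3}
"""

# Parse each non-empty line separately instead of the file as a whole.
objects = [json.loads(line) for line in text.splitlines() if line.strip()]
```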