How to show multiple lines in a google-chart line graph - Polymer

I have Polymer code like this:
<google-chart type='line' options='{"title": "Sales Statistics, Billions", "vAxis": {"minValue" : 0, "maxValue": 40}, "curveType": "function"}' rows='[["Monday", 31], ["Tuesday", 28], ["Wednesday", 31], ["thursday", 22], ["friday", 11]]' cols='[{"label":"Weeks", "type":"string"}, {"label":"Days", "type":"number"}]'></google-chart>
The code works fine, but I don't know how to show multiple lines of data. At the moment my output is a chart with a single line; I want one with several lines.
Thanks.

I found a solution to my problem. Add another element to the cols array:
cols='[{"label":"Weeks", "type":"string"}, {"label":"Days", "type":"number"}, {"label":"Days", "type":"number"}]'
And add another value to each element of the rows array, which provides the coordinates for the second line in the graph:
rows='[["Monday", 31, 11], ["Tuesday", 28, 22], ["Wednesday", 31, 33], ["thursday", 22, 44], ["friday", 11, 44]]'
Thanks.

Related

Integrating Non-Observation Frame Data with Different Dimensionality in Reinforcement Learning

I am trying to understand a conceptual approach to integrating data into a stack of observation frames when that data doesn't have the same dimensionality as the frames.
Example Frame: [1, 2, 3]
Example extra data: [a, b]
Currently, I am approaching this as follows, with 3 frames (rows) representing temporal observation data over 3 time periods and a 4th frame (row) representing non-temporal data for which only the most recent observed values are needed.
Example:
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[a, b, NaN]
]
The a and b are the added data, and the NaN is just a filler value to match the dimensions of the existing data. Would there be meaningful differences (all input welcome) between using NaN and using an outlier value such as -1 that would never be observed by the other measures?
One possible alternative would be to structure the observation data like this:
[
[1, 2, 3, a, b],
[4, 5, 6, a-1, b-1],
[7, 8, 9, a-2, b-3],
]
This, however, seems like a noticeable increase in resource use, and in my context the measures a and b can be universally understood as "bigger is better" or "smaller is better" without context from the other data values.
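For concreteness, here is a minimal numpy sketch of the two layouts described above; the values standing in for a and b are hypothetical, and the second layout simply repeats the most recent extra values in every row rather than keeping their history:

import numpy as np

frames = np.array([[1., 2., 3.],
                   [4., 5., 6.],
                   [7., 8., 9.]])   # three temporal observation frames
extra = np.array([10., 20.])        # hypothetical stand-ins for a and b

# Option 1: pad the extra data to frame width and stack it as a pseudo-frame
pad = np.full(frames.shape[1], np.nan)
pad[:extra.size] = extra
obs_stacked = np.vstack([frames, pad])   # shape (4, 3); last row is [10., 20., nan]

# Option 2: widen every frame with the extra values as additional columns
obs_wide = np.hstack([frames, np.tile(extra, (frames.shape[0], 1))])   # shape (3, 5)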

How to format JSON data for Power BI (Error: "Additional characters were found at the end of the JSON entry.")

I am sending data from Node-RED to Azure Blob Storage and use the Storage Account as a data source for Power BI. The messages arriving in Node-RED are saved to a JSON file (one per day), which is uploaded to the cloud at the end of the day. I have a problem with the format of these messages, because I get this error when I try to edit the data in Power BI:
Details: "Additional characters were found at the end of the JSON entry."
Unfortunately I am inexperienced in this topic and don't know how to fix it.
These are the messages in my file:
{"timestamp": "2019-05-11T22:39:13.908347", "current_ma": 22, "voltage_mv": 229979, "energy_wh": 15, "power_mw": 0}
{"timestamp": "2019-05-11T22:39:18.843627", "current_ma": 22, "voltage_mv": 230069, "energy_wh": 15, "power_mw": 0}
{"timestamp": "2019-05-11T22:39:23.935679", "current_ma": 22, "voltage_mv": 229988, "energy_wh": 15, "power_mw": 0}
{"timestamp": "2019-05-11T22:39:28.865907", "current_ma": 21, "voltage_mv": 230048, "energy_wh": 15, "power_mw": 0}
{"timestamp": "2019-05-11T22:39:33.810613", "current_ma": 21, "voltage_mv": 230081, "energy_wh": 15, "power_mw": 0}
I used to have each of these messages in its own file, but that took up too much space, so I am guessing the problem has something to do with joining all the messages into one file. Maybe the separation between them?
This issue may already be resolved; there is a similar discussion at https://community.powerbi.com/t5/Power-Query/How-to-format-json-data-for-PowerBI-Error-Additional-characters/m-p/689642#M23195. If the issue still persists, please feel free to ask again.
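For reference, the file shown above is newline-delimited JSON: each line is a valid JSON object, but the concatenation of several objects is not a single JSON value, which is exactly what the "Additional characters" error complains about. A minimal Python sketch (the file names are hypothetical) that rewrites such a file into one JSON array a standard JSON parser accepts:

import json

# Each line of the input is a valid JSON object; collect them into a list
records = []
with open('measurements.json') as src:   # hypothetical input file
    for line in src:
        line = line.strip()
        if line:                         # skip blank lines
            records.append(json.loads(line))

# Write the list back out as a single JSON array
with open('measurements_array.json', 'w') as dst:   # hypothetical output file
    json.dump(records, dst)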

Get a JSON attribute's value in a ChatterBot and Django integration

In a ChatterBot and Django integration, statement.text returns
{'text': u'How are you doing?', 'created_at': datetime.datetime(2017, 2, 20, 7, 37, 30, 746345, tzinfo=<UTC>), 'extra_data': {}, 'in_response_to': [{'text': u'Hi', 'occurrence': 3}]}
I want the value of the text attribute, so that it prints How are you doing?
ChatterBot returns the JSON object as a Python dict, so you can use ordinary dictionary operations, like the following:
In [1]: data = {'text': u'How are you doing?', 'created_at': datetime.datetime(2017, 2, 20, 7, 37, 30, 746345, tzinfo=<UTC>), 'extra_data': {}, 'in_response_to': [{'text': u'Hi', 'occurrence': 3}]}
In [2]: data['text']  # or data.get('text'), which is the safer approach
What you got is a dictionary. A value can be obtained with the get() method. You can also use data['text'], but that performs no error checking: get() returns None if the key is not present, whereas ['text'] raises a KeyError.
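A runnable sketch of both access patterns (the datetime field from the output above is omitted so the snippet is self-contained):

# 'data' mirrors the dict returned in the question, minus the datetime field
data = {'text': u'How are you doing?',
        'extra_data': {},
        'in_response_to': [{'text': u'Hi', 'occurrence': 3}]}

print(data['text'])             # 'How are you doing?'; raises KeyError if 'text' is missing
print(data.get('text'))         # same value, but returns None when the key is absent
print(data.get('text', 'n/a'))  # get() also accepts a default for missing keys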

Creating JSON data from string and using json.dumps

I am trying to create JSON data to pass to InfluxDB. I build it from strings, but I get errors. What am I doing wrong? I am using json.dumps, as has been suggested in various posts.
Here is basic Python code:
json_body = "[{'points':["
json_body += "['appx', 1, 10, 0]"
json_body += "], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]"
print("Write points: {0}".format(json_body))
client.write_points(json.dumps(json_body))
The output I get is
Write points: [{'points':[['appx', 1, 10, 0]], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]
Traceback (most recent call last):
line 127, in main
client.write_points(json.dumps(json_body))
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 173, in write_points
return self.write_points_with_precision(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 197, in write_points_with_precision
status_code=200
File "/usr/local/lib/python2.7/dist-packages/influxdb/client.py", line 127, in request
raise error
influxdb.client.InfluxDBClientError
I have tried with double quotes too, but I get the same error. This is stub code (to keep the question minimal); I realize the points list in the example contains just one list object, but in reality it contains several. I am generating the JSON by reading through the outputs of various API calls.
json_body = '[{\"points\":['
json_body += '[\"appx\", 1, 10, 0]'
json_body += '], \"name\": \"WS1\", \"columns\": [\"RName\", \"RIn\", \"SIn\", \"OIn\"]}]'
print("Write points: {0}".format(json_body))
client.write_points(json.dumps(json_body))
I understand that if I used the following, things would work:
json_body = [{ "points": [["appx", 1, 10, 0]], "name": "WS1", "columns": ["Rname", "RIn", "SIn", "OIn"]}]
You don't need to create JSON manually. Just pass an appropriate Python structure into the write_points function. Try something like this:
data = [{'points': [['appx', 1, 10, 0]],
         'name': 'WS1',
         'columns': ['RName', 'RIn', 'SIn', 'OIn']}]
client.write_points(data)
Please visit JSON.org for the proper JSON structure. A few notes on your self-generated JSON:
All strings must be enclosed in double quotes, not single. "This is valid JSON". 'This is not valid'.
The outermost value may be an object, enclosed by curly braces {}, or an array, enclosed by brackets []; an array containing an object, as in your example, is itself valid JSON.
The double brackets around your 'points' value mean it is a list of point rows. Since you say the real data contains multiple points, that nesting is presumably intentional; just make sure each point is itself a list.
Please check out the documentation of the json module for details on how to use it. Basically, you can feed json.dumps() your Python data structure, and it will output it as valid JSON.
In [1]: my_data = {'points': ["appx", 1, 10, 0], 'name': "WS1", 'columns': ["RName", "RIn", "SIn", "OIn"]}
In [2]: my_data
Out[2]: {'points': ['appx', 1, 10, 0], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}
In [3]: import json
In [4]: json.dumps(my_data)
Out[4]: '{"points": ["appx", 1, 10, 0], "name": "WS1", "columns": ["RName", "RIn", "SIn", "OIn"]}'
You'll notice the value of using a Python data structure first: because it's Python, you don't need to worry about single vs. double quotes; json.dumps() will convert them automatically. However, building a string with embedded single quotes leads to this:
In [5]: op_json = "[{'points':[['appx', 1, 10, 0]], 'name': 'WS1', 'columns': ['RName', 'RIn', 'SIn', 'OIn']}]"
In [6]: json.dumps(op_json)
Out[6]: '"[{\'points\':[[\'appx\', 1, 10, 0]], \'name\': \'WS1\', \'columns\': [\'RName\', \'RIn\', \'SIn\', \'OIn\']}]"'
because you fed json.dumps() the string, not the data structure.
So next time, don't attempt to build JSON yourself; rely on the dedicated module to do it.
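And if you really do receive JSON as text from an API, the complement of json.dumps() is json.loads(), which parses a valid (double-quoted) JSON string back into Python objects. A small sketch; the client object is the one from the question:

import json

valid = '[{"points": [["appx", 1, 10, 0]], "name": "WS1", "columns": ["RName", "RIn", "SIn", "OIn"]}]'
data = json.loads(valid)     # parse the JSON text into a list of dicts
print(data[0]['name'])       # WS1
# client.write_points(data)  # then hand the parsed structure to the client, as above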

pm3d in gnuplot with binary data

I have some data files with content
a1 b1 c1 d1
a1 b2 c2 d2
...
[blank line]
a2 b1 c1 d1
a2 b2 c2 d2
...
I plot this with gnuplot using
splot 'file' u 1:2:3:4 w pm3d
Now, I want to use a binary file. I created the file with Fortran using unformatted stream access (direct or sequential access did not work directly). By using gnuplot with
splot 'file' binary format='%float%float%float%float' u 1:2:3
I get a normal 3D plot. However, the pm3d command does not work, as I don't have the blank lines in the binary file. I get the error message:
>splot 'file' binary format='%float%float%float%float' u 1:2:3:4 w pm3d
Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.
According to the demo script at http://gnuplot.sourceforge.net/demo/image2.html, I have to specify the record length (which I still don't fully understand). However, using this script from the demo page with pm3d produces the same error message:
splot 'scatter2.bin' binary record=30:30:29:26 u 1:2:3 w pm3d
So how is it possible to plot this four-dimensional data from a binary file correctly?
Edit: Thanks, mgilson. Now it works fine. Just for the record, my Fortran code snippet:
open(unit=83, file=fname, action='write', status='replace', access='stream', form='unformatted')
a = 0.d0
b = 0.d0
do i = 1, 200
   do j = 1, 100
      write(83) real(a), real(b), c(i,j), d(i,j)
      b = b + db
   end do
   a = a + da
   b = 0.d0
end do
close(83)
The gnuplot commands:
set pm3d map
set contour
set cntrparam levels 20
set cntrparam bspline
unset clabel
splot 'fname' binary record=(100,-1) format='%float' u 1:2:3:4 t 'd as pm3d-projection, c as contour'
Great question, and thanks for posting it. This is a corner of gnuplot I hadn't spent much time with before. First, I needed to generate a little test data -- I used Python, but you could use Fortran just as easily:
Note that my input array (b) is just a 10x10 array. The first two "columns" in the datafile are just the indices (i, j), but you could use anything.
>>> import struct
>>> import numpy as np
>>> a = np.arange(10)
>>> b = a[None,:]+a[:,None]
>>> b
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
[ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]])
>>> with open('foo.dat','wb') as foo:
... for (i,j),dat in np.ndenumerate(b):
... s = struct.pack('4f',i,j,dat,dat)
... foo.write(s)
...
So here I just write four floating-point values to the file for each data point. Again, this is what you've already done using Fortran. Now for plotting it:
splot 'foo.dat' binary record=(10,-1) format='%float' u 1:2:3:4 w pm3d
I believe this specifies that each "scan" is a "record". Since I know that each scan is 10 floats long, that becomes the first index in the record. The -1 indicates that gnuplot should keep reading records until it reaches the end of the file.
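As a sanity check on the binary layout, you can read the file back before plotting; a minimal numpy sketch, assuming the foo.dat written above (four 4-byte floats per data point):

import numpy as np

# One row per (i, j, dat, dat) record, matching format='%float' with 4 values
raw = np.fromfile('foo.dat', dtype=np.float32).reshape(-1, 4)
print(raw[:3])   # the first three points; each scan of 10 points corresponds to record=(10,-1)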