I am using Chartist.js to generate a chart dynamically from JSON data derived from multiple text inputs. The chart itself is generated with the proper labels and series, but no data points show, just the x and y axes. I've discovered that Chartist needs an additional set of brackets '[ ]' around the series data for it to work.
My json data looks like this:
{
  labels: ['07:00', '10:00', '12:00', '14:00'],
  series: [333, 444, 322, 222]
}
Chartist needs the series data to be written like this: series: [[333, 444, 322, 222]]
I am initialising Chartist like this:
new Chartist.Line('.ct-chart', data);
I need a bit of help in getting the extra brackets around the generated series data. I have searched for a method but all I can find are examples where the data is already set, not generated.
You need to split the labels and series out of your data object and assign them in their respective places, wrapping the series in an extra array: Chartist expects series to be an array of series, so even a single series must be nested.
var dataJson = {
  labels: ['07:00', '10:00', '12:00', '14:00'],
  series: [333, 444, 322, 222]
};
new Chartist.Line('.ct-chart', {
labels: dataJson.labels,
series: [dataJson.series]
}, {
fullWidth: true,
chartPadding: {
right: 40
}
});
Let me know if it worked.
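For reference, a minimal sketch of building that data object dynamically from text inputs; the .reading selector is hypothetical, since the original inputs aren't shown:

var data = {
  labels: ['07:00', '10:00', '12:00', '14:00'],
  // Wrap the collected values in one extra array so Chartist
  // sees a single series containing all the points.
  series: [Array.prototype.map.call(
    document.querySelectorAll('.reading'), // hypothetical input selector
    function (el) { return Number(el.value); }
  )]
};
new Chartist.Line('.ct-chart', data);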
I am using ObservableHQ and the Vega-Lite API to do data visualizations and have run into a problem I can't figure out.
The problem is that I would like to access a data object from the following data structure:
Array
  Array
  Array
    Item
    Item
  Array
As you can see in my bad drawing, I have a multidimensional array and would like to access a specific array from the main array. How can I do that using the Vega-Lite API?
vl.markCircle({
    thickness: 4,
    bandSize: 2
  })
  .data(diff[0])
  .encode(
    vl.x().fieldQ("mins").scale({ domain: [-60, 60] }),
    vl.color().fieldN('type').scale({ range: ['#636363', '#f03b20'] }),
  )
  .config({bandSize: 10})
  .width(600)
  .height(40)
  .render()
Thank you,
Based on your comments, I’m assuming that you’re trying to automatically chart all of the nested arrays (separately), not just one of them. And based on your chart code, I’m assuming that your data looks sorta like this:
const diff = [
  [
    { mins: 38, type: "Type B" },
    { mins: 30, type: "Type B" },
    { mins: 28, type: "Type A" },
    …
  ],
  [
    { mins: 20, type: "Type B" },
    { mins: 17, type: "Type A" },
    { mins: 19, type: "Type A" },
    …
  ],
  …
];
First, flatten all the arrays into one big array with flatMap, and record which array each item came from in a new array property on the item object. If each child array represents, say, a different city, or a different year, or a different person collecting the data, you could replace array: i with something more meaningful about the data.
const flat = diff.flatMap((arr, i) => arr.map((d) => ({ ...d, array: i })));
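For the sample data above, flat would then look like this:

[
  { mins: 38, type: "Type B", array: 0 },
  { mins: 30, type: "Type B", array: 0 },
  { mins: 28, type: "Type A", array: 0 },
  { mins: 20, type: "Type B", array: 1 },
  { mins: 17, type: "Type A", array: 1 },
  …
]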
Then use Vega-Lite’s “faceting” (documentation, Observable tutorial and examples) to split the chart into sections, one for each value of array: i, with shared scales. This just adds one line to your example:
vl
  .markCircle({
    thickness: 4,
    bandSize: 2
  })
  .data(flat)
  .encode(
    vl.row().fieldN("array"), // this line is new
    vl
      .x()
      .fieldQ("mins")
      .scale({ domain: [-60, 60] }),
    vl
      .color()
      .fieldN("type")
      .scale({ range: ["#636363", "#f03b20"] })
  )
  .config({ bandSize: 10 })
  .width(600)
  .height(40)
  .render()
Here’s an Observable notebook with examples of this working. As I show there at the bottom, you can also map over your array to make a totally separate chart for each nested array.
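As a minimal sketch of that per-array variant, reusing the encodings from above (in Observable, each render() call yields its own chart element):

diff.map((arr) =>
  vl.markCircle({ thickness: 4, bandSize: 2 })
    .data(arr)
    .encode(
      vl.x().fieldQ("mins").scale({ domain: [-60, 60] }),
      vl.color().fieldN("type").scale({ range: ["#636363", "#f03b20"] })
    )
    .width(600)
    .height(40)
    .render()
);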
I want to load data from an external CSV file into a Highcharts sankey diagram. After trying several options, I am not sure if this is even possible, as the result is always an empty chart. The CSV file will be on the same server in the final version.
As a simple case, see the fiddle https://jsfiddle.net/oy095kzb/
which is merely copy/paste of the official Highcharts sankey example (where the data is included in the code), except that the data module is loaded and csvURL is used instead:
series: [{
  keys: ['from', 'to', 'weight'],
  data: {
    csvURL: 'https://www.test.basleratlas.ch/sankey_test.csv'
  },
  type: 'sankey',
  name: 'Sankey demo series'
}]
CSV file structure:
'from', 'to', 'weight'
'Brazil', 'Portugal', 5
'Brazil', 'France', 1
'Brazil', 'Spain', 1
'Brazil', 'England', 1
'Canada', 'Portugal', 1
...
Notice that the data.csvURL feature is used to fetch data from a CSV stored on the server, like this link: https://demo-live-data.highcharts.com/vs-load.csv, whereas your link seems to trigger a download of the CSV file. Next, notice that your CSV file doesn't have defined column names.
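For comparison, a CSV that the data module parses without extra work looks like this (unquoted values, with the first line supplying the column names):

from,to,weight
Brazil,Portugal,5
Brazil,France,1
Brazil,Spain,1
Brazil,England,1
Canada,Portugal,1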
EDIT
Notice that the data object should be defined outside the series object config.
Also, I used the complete callback to parse the data into the proper format.
Demo: https://jsfiddle.net/BlackLabel/qLu37548/
API: https://api.highcharts.com/highcharts/data.complete
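A minimal sketch of that structure, using the csvURL from the question and leaving the parsing as a stub (see the linked demo for a complete callback):

Highcharts.chart('container', {
    chart: {
        type: 'sankey'
    },
    // data is a top-level option handled by the data module,
    // not a property of the series config
    data: {
        csvURL: 'https://www.test.basleratlas.ch/sankey_test.csv',
        complete: function (options) {
            // reshape the parsed rows into the [from, to, weight]
            // triples the sankey series expects; the exact shape of
            // options depends on the CSV, see the demo above
        }
    },
    series: [{
        keys: ['from', 'to', 'weight'],
        name: 'Sankey demo series'
    }]
});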
My CSV file has three columns:
Date, Values1, Values2
1880.0417, -183.0, 24.2
1880.1250, -171.1, 24.2
1880.2083, -164.3, 24.2
of which I want to display only the second one (Values1) as a line (chart).
I could prepare a CSV via Excel with only that and the date column. But due to ongoing work with the file, it would be much easier to get that CSV parsed while ignoring the second value.
Is that possible? I tried using the »series« parameter, but in vain.
Thanks a lot for any hints!
You can use the seriesMapping property:
data: {
  ...,
  seriesMapping: [{
    x: 0,
    y: 1
  }, {}]
}
Live demo: https://jsfiddle.net/BlackLabel/tyLahrow/
API Reference: https://api.highcharts.com/gantt/data.seriesMapping
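In context, a minimal sketch of the whole config (the file name is a placeholder for your CSV):

Highcharts.chart('container', {
    data: {
        csvURL: 'data.csv', // placeholder for your CSV file
        seriesMapping: [{
            x: 0, // Date column
            y: 1  // Values1 column
        }, {}]    // leave the Values2 series unmapped
    }
});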
I have a file composed of a single array containing multiple records.
{
"Client": [
{
"ClientNo": 1,
"ClientName": "Alpha",
"ClientBusiness": [
{
"BusinessNo": 1,
"IndustryCode": "12345"
},
{
"BusinessNo": 2,
"IndustryCode": "23456"
}
]
},
{
"ClientNo": 2,
"ClientName": "Bravo",
"ClientBusiness": [
{
"BusinessNo": 1,
"IndustryCode": "34567"
},
{
"BusinessNo": 2,
"IndustryCode": "45678"
}
]
}
]
}
I load it with the following code:
create or replace stage stage.test
url='azure://xxx/xxx'
credentials=(azure_sas_token='xxx');
create table if not exists stage.client_test (json_data variant not null);
copy into stage.client_test
from @stage.test/client_test.json
file_format = (type = 'JSON' strip_outer_array = true);
Snowflake imports the entire file as one row.
I would like the COPY INTO command to remove the outer array structure and load the records into separate table rows.
When I load larger files, I hit the size limit for VARIANT and get the error Error parsing JSON: document is too large, max size 16777216 bytes.
If you can import the file into Snowflake, into a single row, then you can use LATERAL FLATTEN on the Client field to generate one row per element in the array.
Here's a blog post on LATERAL and FLATTEN (or you could look them up in the snowflake docs):
https://support.snowflake.net/s/article/How-To-Lateral-Join-Tutorial
If the format of the file is, as specified, a single object with a single property that contains an array with 500 MB worth of elements in it, then perhaps importing it will still work -- if that works, then LATERAL FLATTEN is exactly what you want. But that form is not particularly great for data processing. You might want to use some text processing script to massage the data if that's needed.
RECOMMENDATION #1:
The problem with your JSON is that it doesn't have an outer array. It has a single outer object containing a property with an inner array.
If you can fix the JSON, that would be the best solution, and then STRIP_OUTER_ARRAY will work as you expected.
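That is, the file itself should start with the array (ClientBusiness contents elided here):

[
  { "ClientNo": 1, "ClientName": "Alpha", "ClientBusiness": [ ... ] },
  { "ClientNo": 2, "ClientName": "Bravo", "ClientBusiness": [ ... ] }
]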
You could also try to recompose the JSON (an ugly business) after reading it line by line with:
CREATE OR REPLACE TABLE X (CLIENT VARCHAR);
COPY INTO X FROM (SELECT $1 CLIENT FROM @My_Stage/Client.json);
User Response to Recommendation #1:
Thank you. So from what I gather, COPY with STRIP_OUTER_ARRAY can handle a file starting and ending with square brackets, and parse the file as if they were not there.
The real files don't have line breaks, so I can't read the file line by line. I will see if the source system can change the export.
RECOMMENDATION #2:
Also, if you would like to see what the JSON parser does, you can experiment using this code; I have parsed JSON in the COPY command using similar code. Working with your JSON data in a small project can help you shape the COPY command to work as intended.
CREATE OR REPLACE TABLE SAMPLE_JSON
(ID INTEGER,
DATA VARIANT
);
INSERT INTO SAMPLE_JSON(ID,DATA)
SELECT
1,parse_json('{
"Client": [
{
"ClientNo": 1,
"ClientName": "Alpha",
"ClientBusiness": [
{
"BusinessNo": 1,
"IndustryCode": "12345"
},
{
"BusinessNo": 2,
"IndustryCode": "23456"
}
]
},
{
"ClientNo": 2,
"ClientName": "Bravo",
"ClientBusiness": [
{
"BusinessNo": 1,
"IndustryCode": "34567"
},
{
"BusinessNo": 2,
"IndustryCode": "45678"
}
]
}
]
}');
SELECT
C.value:ClientNo AS ClientNo
,C.value:ClientName::STRING AS ClientName
,ClientBusiness.value:BusinessNo::Integer AS BusinessNo
,ClientBusiness.value:IndustryCode::Integer AS IndustryCode
from SAMPLE_JSON f
,table(flatten( f.DATA,'Client' )) C
,table(flatten(c.value:ClientBusiness,'')) ClientBusiness;
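For the sample record above, that query returns one row per ClientBusiness element:

ClientNo | ClientName | BusinessNo | IndustryCode
---------+------------+------------+-------------
1        | Alpha      | 1          | 12345
1        | Alpha      | 2          | 23456
2        | Bravo      | 1          | 34567
2        | Bravo      | 2          | 45678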
User Response to Recommendation #2:
Thank you for the parse_json example!
Trouble is, the real files are sometimes 500 MB, so the parse_json function chokes.
Follow-up on Recommendation #2:
The JSON needs to be in the NDJSON format (http://ndjson.org/), with one JSON object per line. Otherwise large files will be impossible to parse, since the whole document has to fit into a single VARIANT value.
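A minimal Node.js sketch of that one-time conversion, assuming the file still fits in memory (the file names are hypothetical):

const fs = require('fs');

// Parse the original single-object file, then write one JSON object
// per line so Snowflake can load each Client record as its own row.
const doc = JSON.parse(fs.readFileSync('client_test.json', 'utf8'));
const ndjson = doc.Client.map((c) => JSON.stringify(c)).join('\n');
fs.writeFileSync('client_test.ndjson', ndjson);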
Hope the above helps others running into similar questions!
I'm trying to take data in from a JSON file and link it to my geoJSON file to create a choropleth map, with the county colours bound to the "amount" value. I would also like a corresponding "comment" value bound to a div for when I mouse over that county.
My code at http://bl.ocks.org/eoiny/6244102 will work to generate a choropleth map when my counties.json data is in the form:
"Carlow":3,"Cavan":4,"Clare":5,"Cork":3,
But things get tricky when I try to use the following form:
{
  "id": "Carlow",
  "amount": 11,
  "comment": "The figures for Carlow show something."
},
I can't get my head around how to join the "id": "Carlow" from counties.json with the "id": "Carlow" path created from ireland.json, while at the same time having access to the other values in counties.json, i.e. "amount" and "comment".
Apologies for my inarticulate question but if anyone could point me to an example or reference I could look up that would be great.
I would preprocess the data when it's loaded to make lookup easier in your quantize function. Basically, replace this: data = json; with this:
data = json.reduce(function(result, county) {
  result[county.id] = county;
  return result;
}, {});
and then in your quantize function, you get at the amounts like this:
function quantize(d) {
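  // ~~x truncates x toward zero; amounts (apparently on a 0-12 scale)
  // fall into one of nine CSS classes, q0-9 through q8-9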
return "q" + Math.min(8, ~~(data[d.id].amount * 9 / 12)) + "-9";
}
What the preprocessing does is turn this array (easily accessed by index):
[{id: 'xyz', ...}, {id: 'pdq', ...}, ...]
into this object with county keys (easily accessed by county id):
{'xyz': {id: 'xyz', ...}, 'pdq': {id: 'pdq', ...}, ...}
Here's the working gist: http://bl.ocks.org/rwaldin/6244803
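With that lookup object in place, binding the "comment" field to a div on mouseover is straightforward (the #comment div and the path selection are hypothetical, based on the question):

svg.selectAll("path")
    .on("mouseover", function (d) {
        // d.id matches the county keys of the preprocessed data object
        d3.select("#comment").text(data[d.id].comment);
    });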