D3 getting data from multiple column CSV

I'm following this example for a multi-line graph. I want to plot each forecast in my CSV using the values for the years 2010, 2011, and 2012.
forecast.csv
forecast,2010,2011,2012
Outlook,87,88,88
Reform,50,20,88
Renewal,43,21,88
If my data were as simple as in the linked example, the code to build the chart would look like this:
var priceline = d3.svg.line()
    .x(function(d) { return x(d.year); })
    .y(function(d) { return y(d.value); });

var dataNest = d3.nest()
    .key(function(d) { return d.forecast; })
    .entries(data);

dataNest.forEach(function(d) {
    svg.append("path")
        .attr("class", "line")
        .attr("d", priceline(d.values));
    console.log(dataNest);
});
However, my data comes from a multi-column CSV.
I'm trying to nest the forecasts, so each forecast would have an array of year and value pairs, i.e.:
[0] Object
    [key] Outlook
    [values]
        [0] year: 2010
            value: 28
        [1] year: 2011
            value: 88
but dataNest currently looks like this:
[0] Object
    [key] Outlook
    [values]
        [0] 2010: 87
            2011: 88
            2012: 88
There are many other years in the real data, so transposing is not an option. How can I draw a line from this multi-column CSV data?

You can change your data format or modify your d3.nest function. However, I reckon the easiest solution is using plain JavaScript to modify your array:
dataNest.forEach(function(d) {
    d.values.forEach(function(e) {
        var myArr = [];
        for (var key in e) {
            // skip the "forecast" property itself (its value equals the
            // nest key); every other property is a year column
            if (e[key] != d.key) {
                myArr.push({
                    "year": key,
                    "value": e[key]
                });
            }
        }
        // replace the row object with the array of year/value pairs
        d.values = myArr;
    });
});
Here is a demo, using the data in your question:
var data = d3.csv.parse(d3.select("#csv").text());

var dataNest = d3.nest()
    .key(function(d) {
        return d.forecast;
    })
    .entries(data);

dataNest.forEach(function(d) {
    d.values.forEach(function(e) {
        var myArr = [];
        for (var key in e) {
            if (e[key] != d.key) {
                myArr.push({
                    "year": key,
                    "value": e[key]
                });
            }
        }
        d.values = myArr;
    });
});

console.log(dataNest);
pre {
    display: none;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.4.11/d3.min.js"></script>
<pre id="csv">forecast,2010,2011,2012
Outlook,87,88,88
Reform,50,20,88
Renewal,43,21,88</pre>
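With d.values reshaped into {year, value} pairs, the line generator from the question works essentially unchanged. A minimal sketch of the drawing step, assuming the x and y scales and the svg selection from the linked multi-line example (note the unary + to coerce the strings d3.csv produces into numbers):
var priceline = d3.svg.line()
    .x(function(d) { return x(+d.year); })   // d3.csv parses everything as strings
    .y(function(d) { return y(+d.value); });

dataNest.forEach(function(d) {
    svg.append("path")
        .attr("class", "line")
        .attr("d", priceline(d.values));
});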

Related

Search items in external JSON

I have the URL of a JSON file and I want to get all the items with the same value.
Example:
http://sampleurl.com has this JSON
{
  "posts": [
    {
      "authors": [
        { "name": "John", "age": 30 },
        { "name": "John", "age": 35 }
      ]
    }
  ]
}
What I want to do is to list all those authors with the same name together with their age.
I have tried this with no success:
var allposts = "http://sampleurl.com";
$.each(allposts.posts.authors, function(i, v) {
    if (v.name == "John") {
        alert("Ok");
        return;
    }
});
Thanks
You need to get the data via an Ajax call, e.g. $.getJSON:
const authors = {};
$.getJSON("http://sampleurl.com", data => {
    data.posts.forEach(post => {
        post.authors.forEach(author => {
            authors[author.name] = authors[author.name] || [];
            authors[author.name].push(author);
        });
    });
});
At the end you have an object keyed on unique author names, with each key containing as its value an array of the authors with that name. You can do further processing to transform that to the data structure you need.
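For example, a hypothetical follow-up step that keeps only the names shared by more than one author:
// hypothetical post-processing: keep only the duplicated names
const duplicates = {};
Object.keys(authors).forEach(name => {
    if (authors[name].length > 1) {
        duplicates[name] = authors[name].map(a => a.age);
    }
});
// duplicates is now e.g. { "John": [30, 35] }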
This example doesn't deal with data coming back that isn't in the shape you expect. For example, if some author records are missing a name, you will end up with a key undefined. And if there is no authors key or no posts key in the returned object you will get an exception.
So you have to decide how your program should behave in those cases. Should it explode? Or return an empty object? If you want it to continue with an empty object:
const authors = {};
$.getJSON("http://sampleurl.com", data => {
    if (data.posts) {
        data.posts.forEach(post => {
            (post.authors || []).forEach(author => {
                const name = author.name || 'unknown';
                authors[name] = authors[name] || [];
                authors[name].push(author);
            });
        });
    } else {
        console.log('Warning! Data from API did not contain posts.authors!');
    }
});
Note that neither of these examples deal with the AJAX call itself failing. For that you need to chain a .fail() handler:
const authors = {};
const url = "http://sampleurl.com";
$.getJSON(url, data => {
    if (data.posts) {
        data.posts.forEach(post => {
            (post.authors || []).forEach(author => {
                const name = author.name || 'unknown';
                authors[name] = authors[name] || [];
                authors[name].push(author);
            });
        });
    } else {
        console.log('Warning! Data from API did not contain posts.authors!');
    }
}).fail(res => console.log(`Ajax call to ${url} failed with message ${res.responseText}!`));
10% of programming is getting it to work. The other 90% is coding for what happens when it doesn't work.

Transform Request to AutoQuery friendly

We are working with a third-party grid (Telerik Kendo UI) that has paging/sorting/filtering built in. It sends its GET requests in a certain format, and I'm trying to determine if there is a way to translate these requests into AutoQuery-friendly requests.
Query string params
Sort Pattern:
sort[{0}][field] and sort[{0}][dir]
Filtering:
filter[filters][{0}][field]
filter[filters][{0}][operator]
filter[filters][{0}][value]
So this, which is populated in the querystring:
filter[filters][0][field]
filter[filters][0][operator]
filter[filters][0][value]
would need to be translated to:
FieldName=1 // filter[filters][0][field]+filter[filters][0][operator]+filter[filters][0][value] in a nutshell (not exactly true)
Should I manipulate the querystring object in a plugin by removing the filters (or just adding the ones I need)? Is there a better option here?
I'm not sure there is a clean way to do this on the Kendo side either.
I will explain the two routes I went down; I hope to see a better answer.
First, I tried to modify the querystring in a request filter, but could not. I ended up running the AutoQuery queries manually, by getting the params and modifying them before calling AutoQuery.Execute. Something like this:
var requestparams = Request.ToAutoQueryParams();
var q = AutoQueryDb.CreateQuery(requestobject, requestparams);
AutoQueryDb.Execute(requestobject, q);
I wish there was a more global way to do this. The extension method just loops over all the querystring params and adds the ones that I need.
After doing the above work, I wasn't very happy with the result so I investigated doing it differently and ended up with the following:
Register the Kendo grid filter operations to their equivalent ServiceStack AutoQuery ones:
var aq = new AutoQueryFeature { MaxLimit = 100, EnableAutoQueryViewer=true };
aq.ImplicitConventions.Add("%neq", aq.ImplicitConventions["%NotEqualTo"]);
aq.ImplicitConventions.Add("%eq", "{Field} = {Value}");
Next, on the grid's read operation, we need to reformat the querystring:
read: {
    url: "/api/stuff?format=json&isGrid=true",
    data: function (options) {
        if (options.sort && options.sort.length > 0) {
            options.OrderBy = (options.sort[0].dir == "desc" ? "-" : "") + options.sort[0].field;
        }
        if (options.filter && options.filter.filters.length > 0) {
            for (var i = 0; i < options.filter.filters.length; i++) {
                var f = options.filter.filters[i];
                console.log(f);
                // e.g. field "name" + operator "neq" -> options["nameneq"] = f.value
                options[f.field + f.operator] = f.value;
            }
        }
    }
}
Now, the grid will send the operations in an AutoQuery-friendly manner.
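For example (hypothetical field and value), a grid request with a descending sort on name and a neq filter on name would go out along these lines once the data function above has run:
// before: sort[0][field]=name, sort[0][dir]=desc,
//         filter[filters][0][field]=name,
//         filter[filters][0][operator]=neq,
//         filter[filters][0][value]=foo
// after:  GET /api/stuff?format=json&isGrid=true&OrderBy=-name&nameneq=foo
// "nameneq" is then matched by the "%neq" implicit convention registered above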
I created an AutoQueryDataSource TypeScript class that you may or may not find useful.
Its usage is along the lines of:
this.gridDataSource = AutoQueryKendoDataSource.getDefaultInstance<dtos.QueryDbSubclass, dtos.ListDefinition>('/api/autoQueryRoute', { orderByDesc: 'createdOn' });
export default class AutoQueryKendoDataSource<queryT extends dtos.QueryDb_1<T>, T> extends kendo.data.DataSource {
    private constructor(options: kendo.data.DataSourceOptions = {}, public route?: string, public request?: queryT) {
        super(options);
    }

    defer: ng.IDeferred<any>;

    static exportToExcel(columns: kendo.ui.GridColumn[], dataSource: kendo.data.DataSource, filename: string) {
        let rows = [{ cells: columns.map(d => { return { value: d.field }; }) }];
        dataSource.fetch(function () {
            var data = this.data();
            for (var i = 0; i < data.length; i++) {
                // push a single row for every record
                rows.push({
                    cells: _.map(columns, d => { return { value: data[i][d.field] }; })
                });
            }
            var workbook = new kendo.ooxml.Workbook({
                sheets: [
                    {
                        columns: _.map(columns, d => { return { autoWidth: true }; }),
                        // title of the sheet
                        title: filename,
                        // rows of the sheet
                        rows: rows
                    }
                ]
            });
            // save the file as an Excel file with the .xlsx extension
            kendo.saveAs({ dataURI: workbook.toDataURL(), fileName: filename });
        });
    }

    static getDefaultInstance<queryT extends dtos.QueryDb_1<T>, T>(route: string, request: queryT, $q?: ng.IQService, model?: any) {
        let sortInfo: {
            orderBy?: string,
            orderByDesc?: string,
            skip?: number
        } = {};
        let opts = {
            transport: {
                read: {
                    url: route,
                    dataType: 'json',
                    data: request
                },
                parameterMap: (data, type) => {
                    if (type == 'read') {
                        // for AutoQuery to work, we need only field names, not entity names
                        if (data.sort) {
                            data.sort.forEach((s: any) => {
                                if (s.field.indexOf('.') > -1) {
                                    var arr = _.split(s.field, '.');
                                    s.field = arr[arr.length - 1];
                                }
                            });
                        }
                        sortInfo = {
                            orderByDesc: _.join(_.map(_.filter(data.sort, (s: any) => s.dir == 'desc'), 'field'), ','),
                            orderBy: _.join(_.map(_.filter(data.sort, (s: any) => s.dir == 'asc'), 'field'), ','),
                            skip: 0
                        };
                        if (data.page)
                            sortInfo.skip = (data.page - 1) * data.pageSize;
                        _.extend(data, request);
                        // override sorting if done via the grid
                        if (sortInfo.orderByDesc) {
                            (<any>data).orderByDesc = sortInfo.orderByDesc;
                            (<any>data).orderBy = null;
                        }
                        if (sortInfo.orderBy) {
                            (<any>data).orderBy = sortInfo.orderBy;
                            (<any>data).orderByDesc = null;
                        }
                        (<any>data).skip = sortInfo.skip;
                        return data;
                    }
                    return data;
                },
            },
            requestStart: (e: kendo.data.DataSourceRequestStartEvent) => {
                let ds = <AutoQueryKendoDataSource<queryT, T>>e.sender;
                if ($q)
                    ds.defer = $q.defer();
            },
            requestEnd: (e: kendo.data.DataSourceRequestEndEvent) => {
                new DatesToStringsService().convert(e.response);
                let ds = <AutoQueryKendoDataSource<queryT, T>>e.sender;
                if (ds.defer)
                    ds.defer.resolve();
            },
            schema: {
                data: (response: dtos.QueryResponse<T>) => {
                    return response.results;
                },
                type: 'json',
                total: 'total',
                model: model
            },
            pageSize: request.take || 40,
            page: 1,
            serverPaging: true,
            serverSorting: true
        };
        let ds = new AutoQueryKendoDataSource<queryT, T>(opts, route, request);
        return ds;
    }
}

KendoGrid - display datasource as rows (not columns)

Is there a way to display the dataSource as rows rather than columns?
My dataSource has only one record with 30 properties. I would like to display the properties as rows (property1 => row1, property2 => row2, ...) instead of columns (property1 => column1, property2 => column2, ....).
In fact, I would like to retrieve 3 records (each one with 30 properties), but the 3 records would be the columns and the common 30 properties would be the rows.
How can I do that?
This is what I did so far, but I'm not sure this is the right way.
var _dataSource = function () {
    var dataSource = new kendo.data.DataSource({
        transport: {
            read: { // returns only 3 records
                url: url,
                dataType: "json"
            }
        },
        schema: {
            data: "data",
            total: "total",
            parse: function (data) {
                console.log(data);
                var dataArray = [];
                var i = 0;
                for (var property in data[0]) {
                    console.log(property);
                    console.log(data[0][property]);
                    var record = {
                        header: "",
                        headerId: "",
                        record1: "",
                        record2: "",
                        record3: ""
                    };
                    record.header = headerRows[i]; // array of header strings (the 30 properties)
                    record.headerId = property;
                    record.record1 = data[0][property];
                    record.record2 = data[1][property];
                    record.record3 = data[2][property];
                    console.log(record);
                    dataArray.push(record);
                    i++;
                }
                return dataArray;
            }
        }
    });
    return dataSource;
};
But this gives the error: TypeError: e is undefined
Got it! So simple!
Instead of building the logic in schema.parse, I did it in schema.data.
Actually, it makes more sense in schema.data; I don't know why I was trying schema.parse.
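The final code isn't shown; a minimal sketch of the same transpose moved into schema.data (assuming the same headerRows array and a three-record response under data) might look like:
schema: {
    total: "total",
    // schema.data can be a function: it receives the raw response and
    // returns the array the grid binds to, so the transpose from
    // 3 records x 30 properties to 30 rows can happen here
    data: function (response) {
        var data = response.data; // the 3 records
        var dataArray = [];
        var i = 0;
        for (var property in data[0]) {
            dataArray.push({
                header: headerRows[i], // assumed array of 30 header strings
                headerId: property,
                record1: data[0][property],
                record2: data[1][property],
                record3: data[2][property]
            });
            i++;
        }
        return dataArray;
    }
}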

Read csv to object of objects for d3 [datamaps]

I'm using datamaps and would like to be able to read the data from a csv file.
The data format that datamaps is expecting is the following:
var loadeddata = {
"JPN":{Rate:17.5,fillKey:"firstCat"},
"DNK":{Rate:16.6,fillKey:"secondCat"}
};
I would like to read a csv file of the following structure and transform it into the format that datamaps is expecting:
ISO, Rate, fillKey
JPN, 17.5, firstCat
DNK, 16.6, secondCat
My 'best attempt' was using the following code:
var csvloadeddata;
d3.csv("simpledata.csv", function (error, csv) {
    if (error) return console.log("there was an error loading the csv: " + error);
    console.log("there are " + csv.length + " elements in my csv set");
    var nestFunction = d3.nest().key(function(d) { return d.ISO; });
    csvloadeddata = nestFunction.entries(
        csv.map(function(d) {
            d.Rate = +d.Rate;
            d.fillKey = d.fillKey;
            return d;
        })
    );
    console.log("there are " + csvloadeddata.length + " elements in my data");
});
But this code generates a variable 'csvloadeddata' that looks like this:
var csvloadeddata = [
    { "key": "JPN", "values": { 0: {Rate: 17.5, fillKey: "firstCat"} } },
    { "key": "DNK", "values": { 1: {Rate: 16.6, fillKey: "secondCat"} } }
];
What am I doing wrong?
Found the answer myself. If somebody is interested, this is what I ended up using:
<script>
d3.csv("simpledata.csv", function(error, csvdata1) {
    globalcsvdata1 = csvdata1;
    for (var i = 0; i < csvdata1.length; i++) {
        // re-key each row by its ISO code...
        globalcsvdata1[globalcsvdata1[i].ISO] = globalcsvdata1[i];
        //console.log(globalcsvdata1[i]);
        delete globalcsvdata1[i].ISO;
        // ...and remove the original numeric entry
        delete globalcsvdata1[i];
    }
    myMap.updateChoropleth(globalcsvdata1);
});

var myMap = new Datamap({
    element: document.getElementById('map'),
    scope: 'world',
    geographyConfig: {
        popupOnHover: true,
        highlightOnHover: false
    },
    fills: {
        'AA': '#1f77b4',
        'BB': '#9467bd',
        defaultFill: 'grey'
    }
});
</script>
</body>
</body>
The csv has the following structure:
ISO,fillKey
RUS,AA
USA,BB
Here is a working example: http://www.explainingprogress.com/wp-content/uploads/datamaps/uploaded_gdpPerCapita2011_PWTrgdpe/gdpPerCapita2011_PWTrgdpe.html
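As an alternative sketch that avoids mutating the parsed array in place (same idea, assuming the same ISO/Rate/fillKey columns from the question): build a fresh keyed object and hand that to updateChoropleth:
d3.csv("simpledata.csv", function(error, csv) {
    if (error) return console.log(error);
    var loadeddata = {};
    csv.forEach(function(d) {
        // one keyed entry per row: "JPN" -> {Rate: 17.5, fillKey: "firstCat"}
        loadeddata[d.ISO] = { Rate: +d.Rate, fillKey: d.fillKey };
    });
    myMap.updateChoropleth(loadeddata);
});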

d3 - reading JSON data instead of CSV file

I'm trying to read data into my calendar visualisation using JSON. At the moment it works great using a CSV file:
d3.csv("RSAtest.csv", function(csv) {
var data = d3.nest()
.key(function(d) { return d.date; })
.rollup(function(d) { return d[0].total; })
.map(csv);
rect.filter(function(d) { return d in data; })
.attr("class", function(d) { return "day q" + color(data[d]) +
"-9"; })
.select("title")
.text(function(d) { return d + ": " + data[d]; });
});
It reads the following CSV data:
date,total
2000-01-01,11
2000-01-02,13
...etc
Any pointers on how I can read the following JSON data instead:
{"2000-01-01":19,"2000-01-02":11......etc}
I tried the following but it's not working for me (datareadCal.php spits out the JSON for me):
d3.json("datareadCal.php", function(json) {
    var data = d3.nest()
        .key(function(d) { return d.Key; })
        .rollup(function(d) { return d[0].Value; })
        .map(json);
thanks
You can use d3.entries() to turn an object literal into an array of key/value pairs:
var countsByDate = {'2000-01-01': 10, ...};
var dateCounts = d3.entries(countsByDate);
console.log(JSON.stringify(dateCounts[0])); // {"key": "2000-01-01", "value": 10}
One thing you'll notice, though, is that the resulting array isn't properly sorted. You can sort them by key ascending like so:
dateCounts = dateCounts.sort(function(a, b) {
    return d3.ascending(a.key, b.key);
});
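Since the JSON already maps each date to its total, it can also stand in directly for the data map in the question's calendar code; a minimal sketch, assuming the rect and color selections from the CSV version:
d3.json("datareadCal.php", function(json) {
    // json itself plays the role of the "data" map built from the CSV
    rect.filter(function(d) { return d in json; })
        .attr("class", function(d) { return "day q" + color(json[d]) + "-9"; })
        .select("title")
        .text(function(d) { return d + ": " + json[d]; });
});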
Turn your .json file into a .js file that is included in your html file. Inside your .js file have:
var countsByDate = {'2000-01-01':10,...};
Then you can reference countsByDate directly; no need to read from a file per se.
And you can read it with:
var data = d3.nest()
    .key(function(d) { return d.key; })
    .entries(d3.entries(countsByDate));
As an aside, d3.js says it's better to set your JSON up as:
var countsByDate = [
    {Date: '2000-01-01', Total: '10'},
    {Date: '2000-01-02', Total: '11'}
];