Export CSV/XLS from Meteor application - csv

What is the simplest way to export data to CSV from Meteor?
How do I generate the CSV?
What I've tried:
Add the npm package:
$ meteor add meteorhacks:npm
Add the Node.js CSV suite:
// packages.json
{
  "csv": "0.4.0"
}
Add the Iron Router package:
$ meteor add iron:router
Configure the router on the server:
# server/router.coffee
Router.map ->
  @route 'exportCSV',
    where: 'server'
    path: '/export-csv/:id'
    onAfterAction: ->
      data = ... # Generated CSV
      filename = 'filename.csv'
      headers =
        'Content-type': 'text/csv'
        'Content-Disposition': 'attachment; filename=' + filename
      @response.writeHead 200, headers
      @response.end data
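The data = ... step is left open above. As a minimal sketch of one way to build that CSV string by hand on the server (plain JavaScript; the Items collection and the field names are hypothetical, not part of the original question):
// Hypothetical helper: turn the documents of a collection into a CSV string.
// "Items" and the field names are placeholders.
function collectionToCsv(docs, fields) {
  var escapeCell = function (value) {
    // Quote every cell and double any embedded quotes (RFC 4180 style).
    return '"' + String(value == null ? '' : value).replace(/"/g, '""') + '"';
  };
  var header = fields.map(escapeCell).join(',');
  var rows = docs.map(function (doc) {
    return fields.map(function (field) { return escapeCell(doc[field]); }).join(',');
  });
  return [header].concat(rows).join('\n');
}

// e.g. data = collectionToCsv(Items.find().fetch(), ['name', 'total']);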

For my use case, all of the data was already published to the client, so I decided to generate the file there using FileSaver.
Here's a basic class that builds a CSV by calling addRow(); then call download('xxx.csv') to have the user download the file.
class CsvBuilder
  constructor: ->
    @content = []

  line: ->
    row = [].slice.call(arguments)
    if row.length == 0
      @addBlankRow()
    else
      @addRow(row)
    return

  addRow: (row) ->
    @content.push(@row2Csv(row), "\n")
    return

  addBlankRow: ->
    @content.push("", "\n")
    return

  row2Csv: (row) ->
    cells = for cell in row
      '"' + (cell + "").replace(/"/g, '""') + '"'
    return cells.join(',')

  download: (filename) ->
    try
      isFileSaverSupported = !!new FileSaver.Blob()
    unless isFileSaverSupported
      window.alert("Save as CSV not supported")
      return
    contentBlob = new FileSaver.Blob(@content, {type: "text/csv;charset=utf-8"})
    FileSaver.saveAs(contentBlob, filename)
    return

  destroy: ->
    @content = []
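A usage sketch (my own example, assuming the class above is loaded on the client together with the FileSaver package):
// Hypothetical usage of the CsvBuilder class defined above.
var builder = new CsvBuilder();
builder.line("Name", "Total");     // varargs helper; same as addRow(["Name", "Total"])
builder.addRow(["Alice", 42]);
builder.addBlankRow();
builder.download("export.csv");    // triggers the browser download via FileSaver
builder.destroy();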

Use FileSaver.js from eligrey/FileSaver and save it in the lib folder on your client:
'click .btnRawCSV': function() {
  var rawData = Gifts.find({
    receiptdate: {
      $gte: Session.get("giftsStartDate"),
      $lte: Session.get("giftsEndDate")
    }
  }).fetch();
  var csv = json2csv(rawData, true, true);
  var blob = new Blob([csv], {type: "text/plain;charset=utf-8"});
  saveAs(blob, "rawgifts.csv");
},

Related

.dat (unknown notation) to json format

I am working on an open-source game called Argentum Online; you can check out our code here: https://github.com/ao-libre
The problem I am facing is that it has a lot of files with the extension .dat in this format:
[NPC12]
Name=Sastre
Desc=¡Hola forastero! Soy el Sastre de Ullathorpe, Bienvenido!
Head=9
Body=50
Heading=3
Movement=1
Attackable=0
Comercia=1
TipoItems=3
Hostil=0
GiveEXP=0
GiveGLD=0
InvReSpawn=0
NpcType=0
Alineacion=0
DEF=0
MaxHit=0
MaxHp=0
[NPC19]
Name=Sastre
Desc=¡Bienvenida Viajera! Tengo hermosas vestimentas para ofrecerte...
Head=70
Body=80
Heading=3
Movement=1
Attackable=0
Comercia=1
TipoItems=3
Hostil=0
GiveEXP=0
GiveGLD=0
I would like to know if this kind of format has a proper name and what a good way to convert it to JSON is.
After reading the comment above from @mx0:
The format is called the INI format:
https://en.wikipedia.org/wiki/INI_file
Here is an answer for the transformation from INI to JSON.
npm module:
https://github.com/npm/ini
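A minimal sketch of using that module under Node.js (the file names npcs.dat and npcs.json are just placeholders, not from the original question):
// Parse an INI-style .dat file and write it back out as JSON.
const fs = require('fs');
const ini = require('ini');

// Section headers like [NPC12] become nested objects; values are kept as strings.
const parsed = ini.parse(fs.readFileSync('npcs.dat', 'utf-8'));
fs.writeFileSync('npcs.json', JSON.stringify(parsed, null, 2));

console.log(parsed.NPC12.Name); // "Sastre"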
Or use an ES6 snippet:
let ini2Obj = {};
let currentKey = '';

const keyValuePair = kvStr => {
  const kvPair = kvStr.split('=').map(val => val.trim());
  return { key: kvPair[0], value: kvPair[1] };
};

const result = document.querySelector('#results');

document.querySelector('#inifile').textContent
  .split(/\n/)                                  // split lines
  .map(line => line.replace(/^\s+|\r/g, ''))    // clean up whitespace
  .forEach(line => {                            // convert to object
    line = line.trim();
    if (line.startsWith('#') || line.startsWith(';')) { return; }
    if (line.length) {
      if (/^\[/.test(line)) {
        // a [Section] header starts a new nested object
        currentKey = line.replace(/\[|\]/g, '');
        ini2Obj[currentKey] = {};
      } else if (currentKey.length) {
        const kvPair = keyValuePair(line);
        ini2Obj[currentKey][kvPair.key] = kvPair.value;
      }
    }
  });

result.textContent +=
  `**Check: ini2Obj['NPC12'].Name = ${ini2Obj['NPC12'].Name}`;
result.textContent +=
  `\n\n**The converted object (JSON-stringified)\n${JSON.stringify(ini2Obj, null, '  ')}`;
Original answer:
JavaScript library to convert a .ini file to a .json file (client side)

Kinesis Firehose putting JSON objects in S3 without separator comma

Before sending the data I am using JSON.stringify on it, and it looks like this:
{"data": [{"key1": value1, "key2": value2}, {"key1": value1, "key2": value2}]}
But once it passes through AWS API Gateway and Kinesis Firehose puts it into S3, it looks like this:
{
"key1": value1,
"key2": value2
}{
"key1": value1,
"key2": value2
}
The separator commas between the JSON objects are gone, but I need them to process the data properly.
Template in the API Gateway:
#set($root = $input.path('$'))
{
    "DeliveryStreamName": "some-delivery-stream",
    "Records": [
#foreach($r in $root.data)
#set($data = "{
    ""key1"": ""$r.value1"",
    ""key2"": ""$r.value2""
}")
        {
            "Data": "$util.base64Encode($data)"
        }#if($foreach.hasNext),#end
#end
    ]
}
I had this same problem recently, and the only answers I was able to find were basically just to add line breaks ("\n") to the end of every JSON message whenever you posted them to the Kinesis stream, or to use a raw JSON decoder method of some sort that can process concatenated JSON objects without delimiters.
I posted a python code solution which can be found over here on a related Stack Overflow post:
https://stackoverflow.com/a/49417680/1546785
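For the producer-side workaround (appending the newline before each record ever reaches Firehose), a minimal sketch using the AWS SDK for JavaScript (v2) might look like this; the stream name and the payload are placeholders, not from the original question:
// Hypothetical producer: append '\n' to each JSON record before putting it on the stream.
const AWS = require('aws-sdk');
const firehose = new AWS.Firehose();

async function putJsonRecord(record) {
  // The trailing newline is what keeps the objects separated in the S3 output.
  await firehose.putRecord({
    DeliveryStreamName: 'some-delivery-stream',   // placeholder name
    Record: { Data: JSON.stringify(record) + '\n' }
  }).promise();
}

putJsonRecord({ key1: 'value1', key2: 'value2' }).catch(console.error);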
One approach you could consider is to configure data processing for your Kinesis Firehose delivery stream by adding a Lambda function as its data processor, which would be executed before finally delivering the data to the S3 bucket.
DeliveryStream:
  ...
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamType: DirectPut
    ExtendedS3DestinationConfiguration:
      ...
      BucketARN: !GetAtt MyDeliveryBucket.Arn
      ProcessingConfiguration:
        Enabled: true
        Processors:
          - Parameters:
              - ParameterName: LambdaArn
                ParameterValue: !GetAtt MyTransformDataLambdaFunction.Arn
            Type: Lambda
    ...
And in the Lambda function, ensure that '\n' is appended to the record's JSON string; see the Lambda function myTransformData.ts (Node.js) below:
import {
  FirehoseTransformationEvent,
  FirehoseTransformationEventRecord,
  FirehoseTransformationHandler,
  FirehoseTransformationResult,
  FirehoseTransformationResultRecord,
} from 'aws-lambda';

const createDroppedRecord = (
  recordId: string
): FirehoseTransformationResultRecord => {
  return {
    recordId,
    result: 'Dropped',
    data: Buffer.from('').toString('base64'),
  };
};

const processData = (
  payloadStr: string,
  record: FirehoseTransformationEventRecord
) => {
  let jsonRecord;
  // ...
  // Process the original payload,
  // and create the record in JSON
  return jsonRecord;
};

const transformRecord = (
  record: FirehoseTransformationEventRecord
): FirehoseTransformationResultRecord => {
  try {
    const payloadStr = Buffer.from(record.data, 'base64').toString();
    const jsonRecord = processData(payloadStr, record);
    if (!jsonRecord) {
      console.error('Error creating json record');
      return createDroppedRecord(record.recordId);
    }
    return {
      recordId: record.recordId,
      result: 'Ok',
      // Ensure that '\n' is appended to the record's JSON string.
      data: Buffer.from(JSON.stringify(jsonRecord) + '\n').toString('base64'),
    };
  } catch (error) {
    console.error(`Error processing record ${record.recordId}: `, error);
    return createDroppedRecord(record.recordId);
  }
};

const transformRecords = (
  event: FirehoseTransformationEvent
): FirehoseTransformationResult => {
  let records: FirehoseTransformationResultRecord[] = [];
  for (const record of event.records) {
    const transformed = transformRecord(record);
    records.push(transformed);
  }
  return { records };
};

export const handler: FirehoseTransformationHandler = async (
  event,
  _context
) => {
  const transformed = transformRecords(event);
  return transformed;
};
Once the newline delimiter is in place, AWS services such as Athena will be able to work properly with the JSON record data in the S3 bucket, rather than seeing only the first JSON record.
Once AWS Firehose dumps the JSON objects to S3, it's perfectly possible to read the individual JSON objects from the files.
Using Python, you can use the raw_decode function from the json package:
from json import JSONDecoder, JSONDecodeError
import re
import json
import boto3

NOT_WHITESPACE = re.compile(r'[^\s]')

def decode_stacked(document, pos=0, decoder=JSONDecoder()):
    while True:
        match = NOT_WHITESPACE.search(document, pos)
        if not match:
            return
        pos = match.start()
        try:
            obj, pos = decoder.raw_decode(document, pos)
        except JSONDecodeError:
            # do something sensible if there's some error
            raise
        yield obj

s3 = boto3.resource('s3')
obj = s3.Object("my-bucket", "my-firehose-json-key.json")
file_content = obj.get()['Body'].read().decode('utf-8')  # decode bytes to str

for obj in decode_stacked(file_content):
    print(json.dumps(obj))
    # { "key1": value1, "key2": value2 }
    # { "key1": value1, "key2": value2 }
source: https://stackoverflow.com/a/50384432/1771155
Using Glue / PySpark, you can use:
import json
rdd = sc.textFile("s3a://my-bucket/my-firehose-file-containing-json-objects")
df = rdd.map(lambda x: json.loads(x)).toDF()
df.show()
source: https://stackoverflow.com/a/62984450/1771155
Please use this code to solve your issue:
__Author__ = "Soumil Nitin Shah"

import json
import boto3
import base64


class MyHasher(object):
    def __init__(self, key):
        self.key = key

    def get(self):
        # base64-encode the serialized payload for the Firehose response
        keys = str(self.key).encode("UTF-8")
        keys = base64.b64encode(keys)
        keys = keys.decode("UTF-8")
        return keys


def lambda_handler(event, context):
    output = []

    for record in event['records']:
        payload = base64.b64decode(record['data'])

        # Get the payload from EventBridge and just take the data attribute
        serialize_payload = str(json.loads(payload)) + "\n"

        hasherHelper = MyHasher(key=serialize_payload)
        hash = hasherHelper.get()

        output_record = {
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': hash
        }
        print("output_record", output_record)
        output.append(output_record)

    return {'records': output}

Mongoose Import Json Export with IDs

For unit tests I want to import a fixed, limited subset of my actual database. So I exported my database with the mongo shell into mydata.json. Now I want to read exactly this array of JSON documents into my db, keeping the IDs.
1st: I already fail at reading the JSON export; how do I fix this?
if !db?
  mongoose.connect(configDB.url, {auth: {authdb: configDB.authdb}}, (err) ->
    if (err)
      console.log(err)
  )
db = mongoose.connection
db.on('error', console.error.bind(console, 'connection error:'))
db.once('open', () ->
  console.log "Database established"
  # Delete all data and seed new data
  SomeModel = require(applicationDir + 'node/models/some.model.js')
  SomeModel.remove({}, (err) ->
    console.log('collection somes removed seeding new one')
    fs.readFile(__dirname + '/../../mongo/seed-for-test/somes.json', 'utf-8', (err, fileData) ->
      console.log typeof fileData
      fileData = JSON.parse(fileData)
      console.log fileData.length
      # new SomeModel(fileData).save((err) ->
      #   if err?
      #     return console.log err
      #   console.log('somes saved')
      # )
    )
  )
)
error
string
undefined:2
{ "_id" : { "$oid" : "551d82e30287751fa2f2dfb2" }, "prooven" : true, "title" :
^
SyntaxError: Unexpected token {
at Object.parse (native)
at /Users/MasterG/Desktop/PROJEKTE/lek/specs/backend/mongo.service.spec.js:37:27
at fs.js:336:14
at /Users/MasterG/Desktop/PROJEKTE/lek/node_modules/wiredep/node_modules/bower-config/node_modules/graceful-fs/graceful-fs.js:104:5
at FSReqWrap.oncomplete (fs.js:99:15)
2nd: If I uncomment the lower part, will it work, or is there anything else I need to do?
EDIT
The export does not give back a valid array of JSON objects. The --jsonArray flag has to be used when exporting.
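A mongoexport invocation with that flag might look like this (the database, collection, and output file names are placeholders, not from the original post):
$ mongoexport --db mydb --collection somes --jsonArray --out somes.json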
This works when exporting with the --jsonArray flag, but it looks wrong to me. Also, the .json file is not formatted as nicely as before, and I need to add some extra logic to check whether the last entry was saved.
ObjectId = require('mongoose').Types.ObjectId
SomeModel = require(applicationDir + 'node/models/some.model.js')

if !db?
  mongoose.connect(configDB.url, {auth: {authdb: configDB.authdb}}, (err) ->
    if (err)
      console.log(err)
  )
db = mongoose.connection
db.on('error', console.error.bind(console, 'connection error:'))
db.once('open', () ->
  console.log "Database established"
  # Delete all data and seed new data
  SomeModel.remove({}, (err) ->
    console.log('collection somes removed seeding new one')
    fs.readFile(__dirname + '/../../mongo/seed-for-test/somes.json', 'utf-8', (err, fileData) ->
      fileData = JSON.parse(fileData)
      for singleFileData in fileData
        singleFileData._id = new ObjectId(singleFileData._id.$oid)
        new SomeModel(singleFileData).save((err) ->
          if err?
            return console.log err
          console.log('somes saved')
        )
    )
  )
)

Gulp Yaml Front Matter to JSON add File Name

I'm not sure what the best way to go about this is.
I would like to get the YAML front matter from a Markdown file, convert it to JSON while adding the name of the file, and then combine them into a single JSON array file.
E.g. the files bananas.md and apples.md. First, bananas.md:
---
title: Bananas
type: yellow
count:
- 1
- 2
---
# My Markdown File
apples.md:
---
title: Apples
type: red
count:
- 3
- 4
---
# My Markdown File 2
convert to all.json:
[{"title":"Bananas","type":"yellow","count":[1,2],"file":"bananas"},
{"title":"Apples","type":"red","count":[3,4],"file":"apples"}]
Of course, there wouldn't be a line break; the output would be compact.
I've found some gulp plugins, but it doesn't seem that any of them do exactly what I need, even combined, unless I'm missing something.
Update: I created the plugin gulp-pluck, which vastly simplifies the process.
Here's how it works:
var gulp = require('gulp');
var data = require('gulp-data');
var pluck = require('gulp-pluck');
var frontMatter = require('gulp-front-matter');

gulp.task('front-matter-to-json', function(){
  return gulp.src('./posts/*.md')
    .pipe(frontMatter({property: 'meta'}))
    .pipe(data(function(file){
      file.meta.path = file.path;
    }))
    .pipe(pluck('meta', 'posts-metadata.json'))
    .pipe(data(function(file){
      file.contents = new Buffer(JSON.stringify(file.meta));
    }))
    .pipe(gulp.dest('dist'));
});
End Update
OK, took the time to figure this out. Gulp needs a built-in reduce function! (Maybe I'll work on that some time.)
Dependencies include: gulp, gulp-front-matter, gulp-filter, event-stream, stream-reduce, and gulp-rename.
Written in LiveScript:
gulp.task 'concatYaml' ->
  devDest = './dev/public/'
  gulp.src './src/posts/*.md'
    .pipe filter posted
    .pipe front-matter {property: 'meta'}
    .pipe es.map (file, cb) ->
      file.meta.name = path.basename file.path
      file.meta.url = toUrlPath file.meta.name
      cb null, file
    .pipe reduce ((acc, file) ->
      | acc =>
        acc.meta.push file.meta
        acc
      | _ =>
        acc = file
        acc.meta = [file.meta]
        acc
      ), void
    .pipe es.map (file, cb) ->
      file.contents = new Buffer JSON.stringify file.meta
      cb null, file
    .pipe rename 'posts.json'
    .pipe gulp.dest devDest
And the JavaScript equivalent:
gulp.task('concatYaml', function(){
  var devDest;
  devDest = './dev/public/';
  return gulp.src('./src/posts/*.md')
    .pipe(filter(posted))
    .pipe(frontMatter({ property: 'meta' }))
    .pipe(es.map(function(file, cb){
      file.meta.name = path.basename(file.path);
      file.meta.url = toUrlPath(file.meta.name);
      return cb(null, file);
    }))
    .pipe(reduce(function(acc, file){
      switch (false) {
      case !acc:
        acc.meta.push(file.meta);
        return acc;
      default:
        acc = file;
        acc.meta = [file.meta];
        return acc;
      }
    }, void 8))
    .pipe(es.map(function(file, cb){
      file.contents = new Buffer(JSON.stringify(file.meta));
      return cb(null, file);
    }))
    .pipe(rename('posts.json'))
    .pipe(gulp.dest(devDest));
});

d3 - reading JSON data instead of CSV file

I'm trying to read data into my calendar visualisation using JSON. At the moment it works great using a CSV file:
d3.csv("RSAtest.csv", function(csv) {
var data = d3.nest()
.key(function(d) { return d.date; })
.rollup(function(d) { return d[0].total; })
.map(csv);
rect.filter(function(d) { return d in data; })
.attr("class", function(d) { return "day q" + color(data[d]) +
"-9"; })
.select("title")
.text(function(d) { return d + ": " + data[d]; });
});
It reads the following CSV data:
date,total
2000-01-01,11
2000-01-02,13
.
.
.etc
Any pointers on how I can read the following JSON data instead:
{"2000-01-01":19,"2000-01-02":11......etc}
I tried the following, but it is not working for me (datareadCal.php spits out the JSON for me):
d3.json("datareadCal.php", function(json) {
var data = d3.nest()
.key(function(d) { return d.Key; })
.rollup(function(d) { return d[0].Value; })
.map(json);
thanks
You can use d3.entries() to turn an object literal into an array of key/value pairs:
var countsByDate = {'2000-01-01': 10, ...};
var dateCounts = d3.entries(countsByDate);
console.log(JSON.stringify(dateCounts[0])); // {"key": "2000-01-01", "value": 10}
One thing you'll notice, though, is that the resulting array isn't properly sorted. You can sort them by key ascending like so:
dateCounts = dateCounts.sort(function(a, b) {
  return d3.ascending(a.key, b.key);
});
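Putting that together, a minimal sketch (my own, not from the original answer) of wiring d3.entries() into the question's d3.json() call, keeping the same nest/rollup structure as the CSV version (d3 v3 API; datareadCal.php is assumed to return the object shown above):
d3.json("datareadCal.php", function(json) {
  var data = d3.nest()
    .key(function(d) { return d.key; })          // d3.entries() yields {key, value} pairs
    .rollup(function(d) { return d[0].value; })
    .map(d3.entries(json));

  rect.filter(function(d) { return d in data; })
    .attr("class", function(d) { return "day q" + color(data[d]) + "-9"; })
    .select("title")
    .text(function(d) { return d + ": " + data[d]; });
});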
Turn your .json file into a .js file that is included in your HTML file. Inside your .js file, have:
var countsByDate = {'2000-01-01':10,...};
Then you can reference countsByDate....no need to read from a file per se.
And you can read it with:
var data = d3.nest()
  .key(function(d) { return d.Key; })
  .entries(json);
As an aside... d3.js says it's better to set your JSON up as:
var countsByDate = [
  {Date: '2000-01-01', Total: '10'},
  {Date: '2000-01-02', Total: '11'}
];