I want to convert the following lines of a file into JSON, which I then want to save into a Mongoose schema.
>HWI-ST700660_96:2:1101:1455:2154#5#0/1
GAA…..GAATG
Should be:
{">HWI-ST700660_96:2:1101:1455:2154#5#0/1": "GAA…..GAATG"}
I have tried several options (one sample below) but with no success. Any suggestions?
const parser = require("csv-parse/lib/sync"); // import the synchronous parser
const fs = require("fs"); // import the file reader
const path = require("path"); // for joining paths
const sourceData = fs.readFileSync(path.join(__dirname, "Reads.txt"), "utf8"); // read the locally stored file
console.log(sourceData); // print for checking
const documents = parser(sourceData); // parse; this works for other, column-like data I have tested
console.log(documents); // print the result
This code gives me output like the following:
[ [ '>HWI-ST700660_96:2:1101:1455:2154#5#0/1' ],
[ 'GAATGGAATGAAATGGATAGGAATGGAATGGAATGGAATGGATTGGAATGGATTAGAATGGATTGGAATGGAATGAAATTAATTTGATTGGAATGGAATG' ],...
Similar question: fasta file reading python
Because you are using the parser's default configuration, it simply outputs arrays of arrays.
If you want to receive objects instead, you need to pass the parser some options (columns) first. Take a look at the docs.
When using the sync parsing mode (like you are using) you can provide options like this:
const documents = parse(sourceData, {columns: true})
columns: true will infer the column names from the first line of the input CSV.
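That said, the input in the question is FASTA-like rather than a real CSV. If the goal is the {header: sequence} shape shown in the question, a plain line-pairing sketch (not using csv-parse at all) may be simpler; this assumes the file strictly alternates header and sequence lines:
const fs = require("fs");
const path = require("path");
// Read the file and drop empty lines
const lines = fs
  .readFileSync(path.join(__dirname, "Reads.txt"), "utf8")
  .split(/\r?\n/)
  .filter(line => line.length > 0);
// Pair each ">" header line with the sequence line that follows it
const record = {};
for (let i = 0; i < lines.length; i += 2) {
  record[lines[i]] = lines[i + 1];
}
console.log(JSON.stringify(record)); // {">HWI-...": "GAATG..."}
The resulting object can then be saved through your Mongoose schema like any other document.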
Say you have a variable data; it holds JSON-compatible data and is converted to a string using JSON.stringify(data). How would you then take this JSON and save it as file x at location /pc/locationX?
As mentioned in the other comments, you can't choose the download path target. However, you can use the following method to let your users download the JSON file. You have probably already guessed most of it, since you already know you can use JSON.stringify().
function downloadJson(jsonObj, fileName) {
  // Encode the stringified JSON as a data URI
  const data = 'data:text/json;charset=utf-8,' + encodeURIComponent(JSON.stringify(jsonObj));
  // Create a temporary <a> element and trigger the download
  const a = document.createElement('a');
  a.setAttribute('href', data);
  a.setAttribute('download', `${fileName}.json`);
  // You might need to uncomment the next line for Firefox
  // document.body.appendChild(a);
  a.click();
  a.remove();
}
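A hypothetical call (the object and file name are only placeholders) would look like:
downloadJson({ user: 'alice', visits: 3 }, 'user-data'); // prompts the browser to save user-data.json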
localStorage.setItem('user', JSON.stringify(user));
Then to retrieve it from the store and convert it to an object again:
var user = JSON.parse(localStorage.getItem('user'));
If we need to delete all entries of the store we can simply do:
localStorage.clear();
The code above works, however I would like to load the searchStrings array from a JSON file.
My goal is to have the JSON file on a shared drive so my coworkers are able to edit the names.
You can use the following:
var someObject = require('./somefile.json')
JSON can be imported via require just like Node modules. (See Ben Nadel's explanation.)
You would generally want to store it as a global variable, rather than re-loading it on every keyup event. So if the JSON is saved as watchlist.json, you could add
var watchlist = require('./watchlist');
at the top of your code. Then the search command could be written (without needing the for loop) as:
kitten = kitten.toLowerCase();
if (watchlist.entities.indexOf(kitten) !== -1) {
  alert('potential watchlist');
}
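For this to work, watchlist.json needs an entities array; a hypothetical example of such a file (the names are placeholders):
{
  "entities": ["fluffy", "whiskers", "mittens"]
}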
In the latest versions of Node you can simply use imports!
CommonJs:
const jsonContent = require('path/to/json/file.json')
ES6:
import jsonContent from 'path/to/json/file.json'
You can also import JSON files dynamically; take the following example:
if (condition) {
  const jsonContent = require('path/to/json/file.json')
  // Use the JSON content as you prefer!
}
That way, you only load your JSON file if you really need it, and your code will perform better!
Do you prefer an old school approach?
ES6:
import fs from 'fs' // or const fs = require('fs') in CommonJS
const JSonFile = fs.readFileSync('path/to/json/file.json', 'utf8')
const toObject = JSON.parse(JSonFile)
// read properties from `toObject` constant!
Hope it helps :)
I am a complete noobie in Node.js, and I am a university researcher.
I have a lot of folders and files in XML and JSON format. I need to read them, convert them into one standard format (JSON/CSV), and finally load that into a MySQL database. Any tips? Do you know npm packages for this, or complete solutions?
There are many packages for converting XML to JSON, like xmljs, xml2json, etc.
But it looks like you also need to transform the data into a standard format before inserting it into the database.
I had this problem myself, and I wrote camaro for this purpose: converting and transforming XML to JSON.
All I have to do is write an output template describing what I would like my XML converted to, using XPath syntax as below.
Of course you can just flatten all the attributes you need; the example below is just a sample of what camaro can do.
const transform = require('camaro')
const fs = require('fs')
const xml = fs.readFileSync('examples/ean.xml', 'utf-8')
const template = {
cache_key: '/HotelListResponse/cacheKey',
hotels: ['//HotelSummary', {
hotel_id: 'hotelId',
name: 'name',
rooms: ['RoomRateDetailsList/RoomRateDetails', {
rates: ['RateInfos/RateInfo', {
currency: 'ChargeableRateInfo/#currencyCode',
non_refundable: 'boolean(nonRefundable = "true")',
price: 'number(ChargeableRateInfo/#total)'
}],
room_name: 'roomDescription',
room_type_id: 'roomTypeCode'
}]
}],
session_id: '/HotelListResponse/customerSessionId'
}
const result = transform(xml, template)
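The question also asks about loading the result into MySQL. A minimal, hypothetical sketch using the mysql2 package (the hotels table, its columns, and the connection settings are all assumptions, not something camaro provides):
const mysql = require('mysql2/promise');
async function saveHotels(hotels) {
  // Assumed connection settings and table layout; adjust to your own schema
  const connection = await mysql.createConnection({
    host: 'localhost',
    user: 'root',
    database: 'research'
  });
  for (const hotel of hotels) {
    await connection.execute(
      'INSERT INTO hotels (hotel_id, name) VALUES (?, ?)',
      [hotel.hotel_id, hotel.name]
    );
  }
  await connection.end();
}
saveHotels(result.hotels).catch(console.error);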
Thanks @tuananh.
Is it possible to use an attribute of a tag as a key or value? Some of my data values are in attributes, like <name type="sub-name">Hotel name</name> or <hotelName id='12345'>Hotel name</hotelName>. Is that possible in camaro?
Also, some of my data is not clean, like <name type="sub-name">Hotel <nameHotel>Hilton</nameHotel></name>. Is it possible to remove the nameHotel tag and save it as {name: 'Hotel Hilton'}?
Thank you very much
I'm using D3.js to load a CSV file. It should look like this:
id,
a,
b,
But the CSV is created inside my code, so I store it in a variable like this:
var flare = 'id,\na,\nb,\n'
However, the script does not work:
d3.csv(flare, function(error, data) {
  if (error) throw error;
});
How can I solve this?
Depending on the version of D3 you are going to use, you have to choose the appropriate function:
v3.x
In versions 3.x d3.csv.parse() is what you are looking for:
Parses the specified string, which is the contents of a CSV file, returning an array of objects representing the parsed rows.
For your example this would be
var flare = 'id,\na,\nb,\n';
var data = d3.csv.parse(flare);
v4+
For version 4 and above the CSV parser has become part of the d3-dsv module. The function is now named d3.csvParse().
var flare = 'id,\na,\nb,\n';
var data = d3.csvParse(flare);
I have a large JSON file; it is newline-delimited JSON, where multiple standard JSON objects are separated by newlines, e.g.
{"name": "1", "age": 5}
{"name": "2", "age": 3}
{"name": "3", "age": 6}
I am now using JSONStream in Node.js to parse this large JSON file; the reason I use JSONStream is that it is stream-based.
However, neither parse syntax in the example helps me parse this file, which has a separate JSON object on each line:
var parser = JSONStream.parse(['rows', true]);
var parser = JSONStream.parse([/./]);
Can someone help me with that?
Warning: Since this answer was written, the author of the JSONStream library removed the emit root event functionality, apparently to fix a memory leak.
Future users of this library, you can use the 0.x.x versions if you need the emit root functionality.
Below is the unmodified original answer:
From the readme:
JSONStream.parse(path)
path should be an array of property names, RegExps, booleans, and/or functions. Any object that matches the path will be emitted as 'data'.
A 'root' event is emitted when all data has been received. The 'root' event passes the root object & the count of matched objects.
In your case, since you want to get back the JSON objects as opposed to specific properties, you will be using the 'root' event and you don't need to specify a path.
Your code might look something like this:
var fs = require('fs'),
    JSONStream = require('JSONStream');

var stream = fs.createReadStream('data.json', {encoding: 'utf8'}),
    parser = JSONStream.parse();

stream.pipe(parser);

parser.on('root', function (obj) {
  console.log(obj); // whatever you will do with each JSON object
});
JSONStream is intended for parsing a single huge JSON object, not many separate JSON objects. You want to split the stream at newlines, then parse each line as JSON.
The NPM package split claims to do this splitting, and even has a feature to parse the JSON lines for you.
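A minimal sketch of that approach, based on the split package's README (the file name is a placeholder, and empty lines will surface as parse errors):
var fs = require('fs'),
    split = require('split');

fs.createReadStream('data.json', {encoding: 'utf8'})
  .pipe(split(JSON.parse)) // split on newlines and parse each line
  .on('data', function (obj) {
    console.log(obj); // each chunk is now a parsed object
  })
  .on('error', function (err) {
    console.error(err); // JSON syntax errors (e.g. blank lines) land here
  });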
If your file is not too large, here is an easy, but not performant, solution:
const fs = require('fs');

let rawdata = fs.readFileSync('fileName.json');
// Join the newline-delimited objects with commas and drop the trailing comma
// (this assumes the file ends with a newline)
let convertedData = String(rawdata)
  .replace(/\n/g, ',')
  .slice(0, -1);
// Wrap in brackets so the whole string parses as a JSON array
let jsonData = JSON.parse(`[${convertedData}]`);
I created a package, @jsonlines/core, which parses JSON Lines as an object stream.
You can try the following code:
npm install @jsonlines/core
const fs = require("fs");
const { parse } = require("@jsonlines/core");
// create a duplex stream which parse input as lines of json
const parseStream = parse();
// read from the file and pipe into the parseStream
fs.createReadStream(yourLargeJsonLinesFilePath).pipe(parseStream);
// consume the parsed objects by listening to data event
parseStream.on("data", (value) => {
  console.log(value);
});
Note that parseStream is a standard node duplex stream.
So you can also use for await ... of or other ways to consume it.
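For example, a minimal sketch of the for await ... of variant mentioned above (as an alternative to the 'data' listener, not in addition to it):
async function main() {
  // Node readable streams are async iterable, so this yields
  // the same parsed objects as the 'data' listener above
  for await (const value of parseStream) {
    console.log(value);
  }
}
main().catch(console.error);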
Here's another solution for when the file is small enough to fit into memory. It reads the whole file in one go, converts it into an array by splitting it at the newlines (removing the blank line at the end), and then parses each line.
import fs from "fs";

const parsed = fs
  .readFileSync(`data.jsonl`, `utf8`)
  .split(`\n`)
  .slice(0, -1) // drop the empty string after the trailing newline
  .map(JSON.parse);
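With the example data from the question, parsed should then be an array of three objects:
console.log(parsed.length);  // 3
console.log(parsed[0].name); // "1"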