Load compressed CSV data in d3 for page optimization

I have a 50 MB CSV file that I load into d3.js/dc.js charts. Is there any way I can compress the data before loading it? The page is too slow right now and I would like to optimise it. Any help is much appreciated.
Thanks in advance

I think the best approach would be to implement a lazy-loading solution. The idea is simple: you create a small CSV file, say 2 MB, and render your visualization from it. At the same time you start loading your full 50 MB CSV.
Here is a small snippet:
var DS = {}; // your app holder for keeping the global scope clean

d3.csv('data/small.csv', function(err, smallCSV) {
  // Kick off the big download as soon as the small file has arrived
  d3.csv('data/big.csv', function(err, bigCSV) {
    DS.data = bigCSV;  // when the big data is loaded it replaces the partial data
    DS.drawViz();      // redraw the viz
  });

  // This part runs immediately, while the big file is still loading
  DS.data = smallCSV;
  DS.drawViz();        // function which has all your d3 code and uses DS.data inside
});
The switch from the small file to the big one can be done in such a way that the user has no clue anything happened in the background. Consider an example where a fairly big data file is loaded up front and you can feel the lag at start; such an app could load much faster if the data were loaded in two rounds.

That's a lot of data; give us a sample of the first couple of rows. What are you doing with it, and how much of it affects what's on screen? Where does the CSV come from (i.e., a local file or a web service)?
If it's a matter of downloading the resource, then depending on how common and large the values are, you may be able to refactor them into 1-byte keys with the definitions pre-loaded (hash maps are O(1) access). Also, if you're using a large amount of numerical data, a different number base (i.e., something that uses fewer characters than base 10) might shave some bytes off the final size, since CSV values are strings.
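To illustrate the key-dictionary idea (everything here is hypothetical: the column names r and v, the codes, and the file name), the CSV would carry 1-character codes and base-36 numbers, and a pre-loaded map would expand them client-side:

var CATEGORY = { a: 'North America', b: 'South America', c: 'Europe', d: 'Asia Pacific' };

d3.csv('data/compact.csv', function(err, rows) {
  rows.forEach(function(d) {
    d.region = CATEGORY[d.r];      // expand the 1-byte code via an O(1) hash-map lookup
    d.value = parseInt(d.v, 36);   // numbers were written in base 36 to use fewer characters
  });
  // hand `rows` to your d3/dc.js charts as usual
});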
It sounds like CSV may not be the way to go, though, especially if your CSV is mostly unique strings or certain numerical data that won't benefit from the above optimizations. If you're loading the CSV from a web service, you could change it so that certain chunks are returned via some passed key (or handle it smarter server-side). So you would load only what you need at any given time, and probably cache it.
Finally, you could schedule multiple async calls to load the whole thing in small chunks, similar to what leakyMirror suggested. Since it would probably make the most sense to use a lot of chunks, you'd want to do it in code (instead of typing out all of those callbacks) and use an async event scheduler. I know there's a popular async library (https://github.com/caolan/async) that has a bunch of ways to do this, or you can write your own callback scheduler.
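A rough sketch of that chunked loading (the chunk file names are hypothetical, plain callbacks are used instead of the async library, and DS.drawViz is the redraw function from the answer above): each chunk is appended to the data and the charts are redrawn as it arrives.

var DS = { data: [] };

// Hypothetical chunk files produced by splitting the big CSV server-side
var chunks = ['data/chunk-0.csv', 'data/chunk-1.csv', 'data/chunk-2.csv'];

function loadChunk(i) {
  if (i >= chunks.length) return;          // all chunks loaded
  d3.csv(chunks[i], function(err, rows) {
    if (err) { console.error(err); return; }
    DS.data = DS.data.concat(rows);        // append the newly arrived rows
    DS.drawViz();                          // redraw with whatever is available so far
    loadChunk(i + 1);                      // then fetch the next chunk
  });
}

loadChunk(0);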

Related

Is d3.json slower than other methods for JSON file reads?

So I have a .json file of around 33 MB. I am using d3.json() to read the file, but it takes significant time (around 1.5-2 seconds), which is a lot since I need to update my parameters multiple times (to re-render visualizations using d3 itself).
(I only need to read the file once though, but I haven't figured out a way to maintain it as a global variable, if that's even a good idea.)
Any suggestions? Should I be setting up a backend for this?
Okay, I loaded up the data as a JS variable; that seems to work for my purpose.
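A minimal sketch of that "load once, keep it as a variable" idea, in case it helps someone else (this assumes d3 v5+, where d3.json returns a Promise; the file name and drawViz are made up):

let cachedData = null;   // module-level cache: the 33 MB file is fetched and parsed only once

function getData() {
  if (cachedData === null) {
    cachedData = d3.json('data/big.json');   // store the Promise itself
  }
  return cachedData;
}

// Every parameter change reuses the already parsed data instead of re-reading the file
function update(params) {
  getData().then(function(data) { drawViz(data, params); });
}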

searching in html/txt without loading it into the program [duplicate]

I have a FindFile routine in my program which will list files, but if the "Containing Text" field is filled in, then it should only list files containing that text.
If the "Containing Text" field is entered, then I search each file found for the text. My current method of doing that is:
var
  FileContents: TStringList;
begin
  FileContents := TStringList.Create;
  try
    FileContents.LoadFromFile(Filepath);
    // Found is True when the text occurs anywhere in the file
    Found := Pos(TextToFind, FileContents.Text) > 0;
  finally
    FileContents.Free;
  end;
end;
The above code is simple, and it generally works okay. But it has two problems:
It fails for very large files (e.g. 300 MB)
I feel it could be faster. It isn't bad, but why wait 10 minutes searching through 1000 files, if there might be a simple way to speed it up a bit?
I need this to work for Delphi 2009 and to search text files that may or may not be Unicode. It only needs to work for text files.
So how can I speed this search up and also make it work for very large files?
Bonus: I would also want to allow an "ignore case" option. That's a tougher one to make efficient. Any ideas?
Solution:
Well, mghie pointed out my earlier question How Can I Efficiently Read The First Few Lines of Many Files in Delphi, and as I answered, it was different and didn't provide the solution.
But he got me thinking that I had done this before, and I had. I built a block-reading routine for large files that breaks them into 32 MB blocks. I use that to read the input file of my program, which can be huge. The routine works fine and fast. So step one is to do the same for these files I am looking through.
So now the question was how to efficiently search within those blocks. Well I did have a previous question on that topic: Is There An Efficient Whole Word Search Function in Delphi? and RRUZ pointed out the SearchBuf routine to me.
That solves the "bonus" as well, because SearchBuf has options which include Whole Word Search (the answer to that question) and MatchCase/noMatchCase (the answer to the bonus).
So I'm off and running. Thanks once again SO community.
The best approach here is probably to use memory mapped files.
First you need a file handle; use the CreateFile Windows API function for that.
Then pass that handle to CreateFileMapping to get a file-mapping handle. Finally, use MapViewOfFile to map the file into memory.
To handle large files, MapViewOfFile is able to map only a certain range into memory, so you can e.g. map the first 32 MB, then use UnmapViewOfFile to unmap it, followed by a MapViewOfFile for the next 32 MB, and so on. (EDIT: as was pointed out below, make sure that the blocks you map this way overlap by a multiple of 4 KB, and by at least as much as the length of the text you are searching for, so that you are not overlooking any text which might be split at a block boundary.)
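The overlap rule is the easy part to get wrong, so here is a language-agnostic sketch of it (shown in Node.js with plain buffered reads standing in for the mapped views; the block size and API are not the Delphi ones this answer describes). Each block starts pattern-length minus one bytes before the end of the previous one, so a match straddling a block boundary is still found in full:

const fs = require('fs');

// Search `file` for `pattern` in fixed-size blocks that overlap by pattern.length - 1 bytes
function containsPattern(file, pattern, blockSize) {
  blockSize = blockSize || 32 * 1024 * 1024;
  const needle = Buffer.from(pattern);
  const buf = Buffer.alloc(blockSize);
  const fd = fs.openSync(file, 'r');
  try {
    let pos = 0;
    for (;;) {
      const read = fs.readSync(fd, buf, 0, blockSize, pos);
      if (read === 0) return false;                        // end of file, nothing found
      if (buf.subarray(0, read).includes(needle)) return true;
      if (read < blockSize) return false;                  // the last (short) block has been searched
      pos += blockSize - (needle.length - 1);              // step back so boundary matches are not missed
    }
  } finally {
    fs.closeSync(fd);
  }
}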
To do the actual searching once (part of) the file is mapped into memory, you can make a copy of the source for StrPosLen from SysUtils.pas (it's unfortunately defined in the implementation section only and not exposed in the interface). Leave one copy as-is and make another copy, replacing Wide with Ansi every time. Also, if you want to be able to search binary files which might contain embedded #0's, you can remove the "(Str1[I] <> #0) and" part.
Either find a way to identify if a file is ANSI or Unicode, or simply call both the Ansi and Unicode version on each mapped part of the file.
Once you are done with each file, call UnmapViewOfFile first, then CloseHandle on the file mapping handle, and finally CloseHandle on the file handle.
EDIT:
A big advantage of using memory mapped files instead of using e.g. a TFileStream to read the file into memory in blocks is that the bytes will only end up in memory once.
Normally, on file access, Windows first reads the bytes into the OS file cache and then copies them from there into the application's memory.
If you use memory-mapped files, the OS can map the physical pages from the OS file cache directly into the address space of the application without making another copy (reducing the time needed for the copy and halving memory usage).
Bonus Answer: By calling StrLIComp instead of StrLComp you can do a case insensitive search.
If you are looking for plain text-string searches, look at the Boyer-Moore search algorithm. It works well with memory-mapped files and gives you a really fast search engine. There are some Delphi units around that contain implementations of this algorithm.
To give you an idea of the speed: I currently search through 10-20 MB files and it takes on the order of milliseconds.
Oh, I just read that it might be Unicode; I'm not sure if those implementations support that, but definitely look down this path.
This is a problem connected with your previous question How Can I Efficiently Read The First Few Lines of Many Files in Delphi, and the same answers apply. If you don't read the files completely but in blocks, then large files won't pose a problem. There's also a big speed-up to be had for files containing the text: you should cancel the search upon the first match. Currently you read whole files even when the text to be found is in the first few lines.
May I suggest a component? If so, I would recommend ATStreamSearch.
It handles ANSI and Unicode (and even EBCDIC and Korean and more).
Or the class TUTBMSearch from JclUnicode (Jedi-jcl). It was mainly written by Mike Lischke (of VirtualTreeView). It uses a tuned Boyer-Moore algorithm that ensures speed. The bad point in your case is that it works entirely in Unicode (WideStrings), so the conversion from String to WideString risks being a penalty.
It depends on what kind of data you are going to search. To achieve really efficient results you will need to let your program parse the interesting directories, including all the files in them, and keep the data in a database which you can then query for a specific word in a specific list of files (generated from the search path). A database statement can provide results in milliseconds.
The issue is that you will have to let it run and parse all the files after installation, which may take more than an hour depending on the amount of data you wish to parse.
The database should be updated each time your program starts; this can be done by comparing the MD5 value of each file to see whether it has changed, so you don't have to re-parse all your files every time.
This way of working is interesting if all your data is in a constant place and you analyse the same files repeatedly rather than completely new files each time. Some code analysers work like this, and they are really efficient: you invest some time in parsing and saving the interesting data, and afterwards you can jump to the exact place where a search word appears and provide a list of all the places it appears in a very short time.
If the files are to be searched multiple times, it could be a good idea to use a word index.
This is called "Full Text Search".
It will be slower the first time (text must be parsed and indexes must be created), but any future search will be immediate: in short, it will use only the indexes, and not read all text again.
You have the exact parser you need in The Delphi Magazine Issue 78, February 2002:
"Algorithms Alfresco: Ask A Thousand Times
Julian Bucknall discusses word indexing and document searches: if you want to know how Google works its magic this is the page to turn to."
There are several FTS implementations for Delphi:
Rubicon
Mutis
ColiGet
Google is your friend..
I'd like to add that most databases have an embedded FTS engine. SQLite3 even has a very small but efficient implementation, with page ranking and such.
We provide direct access from Delphi, with ORM classes, to this Full Text Search engine, named FTS3/FTS4.

How to convert a large JSON file into XML?

I have a large JSON file; its size is 5.09 GB. I want to convert it to an XML file. I tried online converters, but the file is too large for them. Does anyone know how to do that?
The typical way to process XML as well as JSON files is to load them completely into memory. You then have a so-called DOM which allows various kinds of data processing. But neither XML nor JSON is really designed for storing as much data as you have here. In my experience you will typically run into memory problems as soon as you exceed a 200 MB limit. This is because the DOM is built from individual objects, an approach that results in a huge memory overhead far exceeding the amount of data you actually want to process.
The only way for you to process files like that is basically to take a stream approach. The basic idea: Instead of parsing the whole file and loading it into memory you parse and process the file "on the fly". As data is read it is parsed and events are triggered on which your software can react and perform some actions as needed. (For details on that have a look at the SAX API in order to understand this concept in more detail.)
As you stated, you are processing JSON, not XML. Stream APIs for JSON should be available in the wild as well. Anyway, you could implement one fairly easily yourself: JSON is a pretty simple data format.
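For what it's worth, here is a rough Node.js sketch of that streaming idea. It is only a sketch under two stated assumptions: the stream-json npm package is used for the SAX-like parsing, and the top level of the file is a JSON array of flat records (the element and file names are made up). Each record is turned into XML and written out as soon as it is parsed, so the 5 GB document never has to fit in memory:

const fs = require('fs');
// Assumption: npm install stream-json, and big.json is a top-level JSON array
const StreamArray = require('stream-json/streamers/StreamArray');

const escapeXml = s => String(s).replace(/[<>&'"]/g, c =>
  ({ '<': '&lt;', '>': '&gt;', '&': '&amp;', "'": '&apos;', '"': '&quot;' }[c]));

const out = fs.createWriteStream('big.xml');
out.write('<records>\n');

const records = fs.createReadStream('big.json').pipe(StreamArray.withParser());

records.on('data', ({ value }) => {
  // `value` is one fully parsed array element; write it out and let it be garbage collected
  const fields = Object.keys(value)
    .map(k => `<${k}>${escapeXml(value[k])}</${k}>`)
    .join('');
  if (!out.write(`  <record>${fields}</record>\n`)) {
    records.pause();                          // respect backpressure from the writer
    out.once('drain', () => records.resume());
  }
});

records.on('end', () => {
  out.write('</records>\n');
  out.end();
});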
Nevertheless such an approach is not optimal: typically this kind of concept results in very slow data processing because of the millions of method invocations involved. For every item encountered you typically need to call a method in order to perform some data-processing task; this, together with the additional checks about what kind of information you have currently encountered in the stream, slows down data processing considerably.
You should really consider a different kind of approach: first split your file into many small ones, then perform the processing on them. This might not seem very elegant, but it keeps your task much simpler, and you gain one main advantage: it will be much easier for you to debug your software. Unfortunately you are not very specific about your problem, so I can only guess, but large files typically imply a pretty complex data model. Therefore you will probably be much better off with many small files instead of a single huge one; it later allows you to dig into individual aspects of your data and the processing as needed. You will probably fail to get any detailed insight of that kind while working with a single 5 GB file, and on errors you will have trouble identifying which part of the huge file is causing the problem.
As I already stated, you are unfortunately not very specific about your problem. Sorry, but without more details about your problem (and your data in particular) I can only give you these general recommendations about data processing; I cannot tell you which approach will work best in your case.

How to read a target extent from a huge TIFF file with the LibTiff.Net library

I have a big TIFF file which I don't want to load into memory all at once (that would make my application use far too much memory); I want to load one target part of it at a time and show that part on screen.
I am trying to use the LibTiff.Net library to implement this, but I haven't found a suitable API for it.
Currently I can only load it by allocating a new (very big!) array and then calling the ReadRGBAImageOriented function to load the RGBA values into it.
Does anyone have experience with this?
Thanks

One big object or many small objects in HTML5 local storage?

Is it better to store one big JSON object in local storage and append other data to it, or to use multiple small objects with the data in them? I would like to use it as history storage for an application (so I think 5 MB is enough).
Thanks
It really depends on the way you want to access your data:
If you want to access small parts of your data once in a while, you should put them into small objects. I did a couple of performance tests a while ago and found the lookup of an object in a localStorage filled with a lot of objects quite fast. If you use small objects, you might also use less memory because you don't have to read and parse one big JSON object.
On the other hand, keep in mind that reading from localStorage is a blocking operation, so if you need to iterate over the objects, it might block your whole browser. In that case, it might be better to save the data in one huge chunk and read it all at once.
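As a tiny sketch of the two options (the key names and entry shape are made up): per-entry keys keep every read and parse small, while a single blob has to be parsed and rewritten in full on every append.

// Option 1: many small items, one key per history entry
function saveEntry(entry) {
  localStorage.setItem('hist:' + entry.id, JSON.stringify(entry));
}
function loadEntry(id) {
  var raw = localStorage.getItem('hist:' + id);
  return raw === null ? null : JSON.parse(raw);
}

// Option 2: one big item that is parsed and rewritten on every change
function appendEntry(entry) {
  var history = JSON.parse(localStorage.getItem('history') || '[]');
  history.push(entry);
  localStorage.setItem('history', JSON.stringify(history));
}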
It's certainly easier to call one function on one item, like so:
localStorage.setItem('JSON', '{"whole":[{"lotta":"json"}],"data":"here"}');
you can also use localStorage like a regular object like so:
localStorage['JSON'] = '{"whole":[{"lotta":"json"}],"data":"here"}';
but either way you'll have to parse the JSON and run functions against it etc.
It really depends on how much history you want to store, and how you want to store it.
Also, localStorage persists between visits; if you only need to keep the history during the current visit, you can use sessionStorage in exactly the same way.
I usually use it with many small objects, and I use the CarboStorage library for that, which is a wrapper around localStorage.