How to transfer large data between pages in Perl/CGI?

I have worked with CGI pages a lot and dealt with cookies and storing the data in the /tmp directory in Linux.
Basically I am running an SQL query for millions of records and saving the results in a hash. I want to hand that data to an Ajax call (which will eventually perform some calculations and render a graph using a Google API).
Or, I want to transfer that data to another CGI page somehow.
PS: The data I am talking about here is on the order of 10-100+ MB.
Until now, I've been saving that data to a file on the server, but again, it's a hassle to deal with that file on the server for each query.

You don't mention why it's a hassle to deal with the data on the server for each query, but assuming the hassle is working with the file, DBM::Deep might make it relatively easy to write the hash out and get it back again. Once you have that, you could create a simple script that returns it as JSON and access it as needed from JavaScript or other pages, although I suspect the browser might slow down with a 100 MB JSON data structure.
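A minimal sketch of that approach, assuming a DBM::Deep file under /tmp and a top-level "results" key, both of which are my own arbitrary choices:

```perl
#!/usr/bin/perl
# Sketch only: persist the query results with DBM::Deep, then serve them back
# as JSON from a small CGI script. The file path and the "results" key are
# placeholders, not anything specified in the question.
use strict;
use warnings;
use CGI;
use DBM::Deep;
use JSON;

my $db = DBM::Deep->new('/tmp/query_results.db');

# Writer side (run once per SQL query): copy the big hash into the file.
# $db->{results} = \%rows_from_sql;

# Reader side (the Ajax endpoint): pull it back out and emit JSON.
my $q = CGI->new;
print $q->header('application/json');
print encode_json( $db->export->{results} // {} );
```

DBM::Deep keeps the structure on disk between requests, so the CGI process does not have to re-run the query or rebuild the hash, though exporting 100 MB back into memory to serialize it is still not free.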

Related

Bulk uploading data to Parse.com

I have about 10 GB worth of data that I would like to import into Parse. The data is currently in JSON format, which is great for importing with the Parse importer.
However, I have no unique identifier for these objects. They do have unique properties, e.g. a URL, but the IDs pointing to specific objects need to be constant.
What would be the best way to edit this large amount of data in bulk on their server without running into request issues (as I'm currently on the free pricing plan) and without taking too much time to alter the data?
Option 1
Import the data once and export it as JSON with the newly assigned objectIds. Then edit the records locally, matching on the URL, and replace the class with the edited data. Any new additions will receive a new objectId from Parse.
How much downtime will there be between import and export, given that I would need to delete the class and recreate it? Are there any other concerns with this methodology?
Option 2
Query for the URL (or an array of URLs), edit the data, then re-save it. This means the data will persist throughout, but since the edit will touch hundreds of thousands of objects, will this most likely overrun the request limit?
Option 3
Is there a better option I am missing?
The best option is to upload to Parse and then edit through their normal channels. Using various hacks it is possible to stay below the 30 requests/second offered as part of the free tier. You can iterate over the data using background jobs (written in JavaScript) -- you may need to slow down your processing so you don't hit the limit. The super hacky way is to download from the table to a client (iOS/Android) app and then push back up to Parse. If you do this in batches (not a synchronous for loop, by the way), then the latency alone will keep you under the 30 requests/sec limit.
I'm not sure why you're worried about downtime. If the data isn't already uploaded to Parse, can't you upload it, pull it down and edit it, and re-upload it -- taking as long as you'd like? Do this in a separate table from any you are using in production, and you should be just fine.
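The background jobs the answer describes would be JavaScript Cloud Code on Parse's side; purely as a sketch of the same throttled re-save idea driven from the outside over the Parse REST API, with the app keys, class name and the actual edit all placeholders:

```perl
#!/usr/bin/perl
# Sketch only: re-save already-fetched objects over the Parse REST API while
# throttling to stay well under the free tier's rate limit. Credentials, the
# class name "MyClass", and the field being edited are placeholders.
use strict;
use warnings;
use LWP::UserAgent;
use JSON;
use Time::HiRes qw(sleep);

my $ua  = LWP::UserAgent->new;
my @hdr = (
    'X-Parse-Application-Id' => 'YOUR_APP_ID',
    'X-Parse-REST-API-Key'   => 'YOUR_REST_KEY',
    'Content-Type'           => 'application/json',
);

my @objects;   # filled from an earlier query/export; each needs objectId and url

for my $obj (@objects) {
    my $res = $ua->put(
        "https://api.parse.com/1/classes/MyClass/$obj->{objectId}",
        @hdr,
        Content => encode_json({ canonicalUrl => $obj->{url} }),  # your edit here
    );
    warn "$obj->{objectId}: " . $res->status_line unless $res->is_success;
    sleep 0.05;   # roughly 20 requests/second, under the 30/sec ceiling
}
```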

Grails with CSV (No DB)

I have been building a Grails application for quite a while with dummy data using a MySQL server; this was eventually supposed to be connected to a Greenplum DB (a PostgreSQL cluster).
But this is not feasible anymore due to firewall issues.
We are contemplating connecting Grails to a CSV file on a shared drive (which is constantly updated by the Greenplum DB; data is appended hourly only).
These CSV files are fairly large (3 MB, 30 MB and 60 MB); the last file has 550,000+ rows.
Quick questions:
Is this even feasible? Can CSV be treated as a database, and can Grails directly access this CSV file and run queries on it, similar to a DB?
Assuming this is feasible, how much rework will be required in the Grails code in the DataSource, controller and index? (Currently we are connected to MySQL and we filter data in the controller and index using SQL queries and Ajax calls using remoteFunction.)
Will the constant reading (CSV -> Grails) and writing (Greenplum -> CSV) corrupt the CSV file or bring up any more problems?
I know this is not a very robust method, but I really need to understand the feasibility of this idea. Can Grails function without any DB, with merely a CSV file on a shared drive accessible to multiple users?
The short answer is, No. This won't be a good solution.
No.
It would be nearly impossible, if possible at all, to rework this.
Concurrent access to a file like that in any environment is a recipe for disaster.
Grails is not suitable for a solution like this.
Update:
Have you considered using the built-in H2 database, which can be packaged with the Grails application itself? This way you can distribute the database engine along with your Grails application within the WAR. You could even have it populate its database from the CSV you mention the first time it runs, or periodically, depending on your requirements.

Dynamic JSON file vs API

I am designing a system with 30,000 objects or so and can't decide between the two: either have a JSON file pre-computed for each one and get the data by pointing to the URL of the file (I think Twitter does something similar), or have a PHP/Perl/whatever-else script that will produce the JSON object on the fly when requested, say from a database, and send it back. Is one approach more suitable than the other? I guess if it takes a long time to generate the JSON data it is better to have ready-made JSON files. What if generating is as quick as accessing a database? Although I suppose one would have a dedicated table in the database specifically for that. The data doesn't change very often, so updating is not a constant thing; in that respect the data is static for all intents and purposes.
Anyway, any thoughts would be much appreciated!
Alex
You might want to try MongoDB, which retrieves objects as JSON and is highly scalable and easy to set up.
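Whichever store you pick, the pre-computed-file option from the question is easy to script. A minimal Perl sketch, assuming a DBI-reachable table named "objects" and an output directory of my own choosing (both hypothetical):

```perl
#!/usr/bin/perl
# Sketch only: regenerate one static JSON file per object whenever the data
# changes (e.g. from cron). The DSN, table, columns and output path are all
# assumptions for illustration.
use strict;
use warnings;
use DBI;
use JSON;

my $dbh = DBI->connect('dbi:mysql:database=myapp', 'user', 'pass',
                       { RaiseError => 1 });

my $rows = $dbh->selectall_arrayref(
    'SELECT id, name, payload FROM objects', { Slice => {} });

for my $row (@$rows) {
    open my $fh, '>', "/var/www/json/$row->{id}.json" or die $!;
    print {$fh} encode_json($row);
    close $fh;
}
```

Since the data rarely changes, regenerating 30,000 small files is a cheap batch job, and serving them is plain static-file delivery that the web server and clients can cache.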

Where to store info besides mysql

My PHP script pulls about 1,000 names from the MySQL DB on a certain page. These names are used by a JavaScript autocomplete script.
I think there's a better way to do this. I would like to update the names with a cron job once a day (PHP) and store the names locally in a text file. Where else could I store them? It's not sensitive info.
It should be readable and writable by PHP.
Since you only need the data updated once a day, have a cron script generate a static JSON file in some fixed location. Then read it with Ajax on the client and make sure the client caches it.
Or, potentially, generate the file whenever the database is updated (if that is applicable; I don't know your application).
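A minimal sketch of that cron job in Perl (the question's stack is PHP, but the shape is the same); the DSN, table name and output paths are assumptions:

```perl
#!/usr/bin/perl
# Sketch only: dump the names to a static JSON file once a day from cron.
# Table name, column, DSN and paths are placeholders.
use strict;
use warnings;
use DBI;
use JSON;

my $dbh   = DBI->connect('dbi:mysql:database=site', 'user', 'pass',
                         { RaiseError => 1 });
my $names = $dbh->selectcol_arrayref('SELECT name FROM members ORDER BY name');

# Write to a temp file and rename, so readers never see a half-written file.
open my $fh, '>', '/var/www/static/names.json.tmp' or die $!;
print {$fh} encode_json($names);
close $fh;
rename '/var/www/static/names.json.tmp', '/var/www/static/names.json' or die $!;
```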
You could try Memcache. But that could be like using a sledge-hammer to crack a nut.
Edit: What about storing the data in a simple file and letting clients (JavaScript) download it? Clients would not query the server for every keystroke because they could search for matching values themselves. The format could be JSON because it is simple and native to JavaScript.
It's unlikely reading from a text file will be much faster than a database query - MySQL already does a lot of caching that should make your query speedy.
If you need to make this query often and performance is a problem, you could consider using a caching module for PHP.
Related
The best way of PHP Caching

insert csv file into MySQL with user id

I'm working on a membership site where users are able to upload a CSV file containing sales data. The file will then be read and parsed, and the data will be charted, which will allow me to dynamically create charts.
My question is how to handle this CSV upload. Should it be uploaded to a folder and stored for later, or should it be inserted directly into a MySQL table?
Depends on how much processing needs to be done, I'd say. If it's "short" data and processing is quick, then your upload-handling script should be able to take care of it.
If it's a large file and you'd rather not tie up the user's browser/session while the data's parsed, then do the upload-now-and-deal-with-it-later option.
It depends on how you think the users will use this site.
What do you estimate the size of the files for these users to be?
How often (if ever) would they upload a file twice? Can they download the charts?
If the files are small and more for one-off use, you could upload and process them on the fly; if they require repeated access and analysis, then you will save the users time by importing the data into the database.
The LOAD DATA INFILE command in MySQL handles uploads like that really nicely. If you create the table you want to upload to and then use that command, it works great and is super quick; I've loaded several thousand rows of data in under 5 seconds with it.
http://dev.mysql.com/doc/refman/5.5/en/load-data.html
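From Perl, a rough sketch of how that could look with DBI; the table "sales", its columns, and the file path are placeholders, and DBD::mysql needs mysql_local_infile enabled for the LOCAL variant (the server must allow it too):

```perl
#!/usr/bin/perl
# Sketch only: load a saved CSV upload into MySQL with LOAD DATA LOCAL INFILE.
# Table, columns, credentials and path are placeholders.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    'dbi:mysql:database=members;mysql_local_infile=1',
    'user', 'pass', { RaiseError => 1 }
);

my $file = '/tmp/upload_12345.csv';   # wherever the upload handler saved it
my $sql  = sprintf q{
    LOAD DATA LOCAL INFILE %s
    INTO TABLE sales
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (sale_date, amount, product)
}, $dbh->quote($file);

$dbh->do($sql);
```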