Where to store info besides MySQL

My PHP script pulls about 1,000 names from the MySQL db on a certain page. These names are used for a JavaScript autocomplete script.
I think there's a better method to do this. I would like to update the names once a day with a cron job (PHP) and store them locally, perhaps in a text file. Where else could I store them? It's not sensitive info.
It should be readable and writable by PHP.

Since you only need the data updated once a day, have a cron script generate a static JSON file in some fixed location. Then read it with Ajax on the client and make sure the client caches it.
Or, potentially, generate the file whenever the database is updated (if this is applicable; I don't know your application).
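As a rough sketch of what that cron script might look like, assuming a hypothetical people table with a name column and a web root of /var/www/html (adjust both to your setup):

<?php
// nightly-names.php - run once a day from cron, e.g.:
//   0 3 * * * php /path/to/nightly-names.php
// Dumps all names into a static JSON file that the autocomplete
// fetches with Ajax.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$names = $pdo->query('SELECT name FROM people ORDER BY name')
             ->fetchAll(PDO::FETCH_COLUMN);

// Write to a temp file first, then rename: readers never see a half-written file.
$tmp = '/var/www/html/names.json.tmp';
file_put_contents($tmp, json_encode($names));
rename($tmp, '/var/www/html/names.json');

Serving names.json as a plain static file also lets Apache handle the client-side caching headers for you.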

You could try Memcache. But that could be like using a sledgehammer to crack a nut.
Edit: What about storing the data as a simple file and letting users (JavaScript) download it? Clients would not have to query the server for every keystroke, because they could search for matching values themselves. The format could be JSON, because it is simple and native to JavaScript.

It's unlikely reading from a text file will be much faster than a database query - MySQL already does a lot of caching that should make your query speedy.
If you need to make this query often and performance is a problem, you could consider using a caching module for PHP.
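If a full caching module is overkill, a minimal file-based cache is one alternative; this sketch assumes the same hypothetical people table as above and caches the query result for a day:

<?php
// names.php - rebuild the cached JSON only when it is older than a day.
$cacheFile = sys_get_temp_dir() . '/names.json';
$ttl       = 86400; // one day, matching the once-a-day requirement

if (!is_file($cacheFile) || time() - filemtime($cacheFile) > $ttl) {
    $pdo   = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
    $names = $pdo->query('SELECT name FROM people ORDER BY name')
                 ->fetchAll(PDO::FETCH_COLUMN);
    file_put_contents($cacheFile, json_encode($names), LOCK_EX);
}

header('Content-Type: application/json');
readfile($cacheFile);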
Related
The best way of PHP Caching


can i create a database without mysql on raspberry pi

I've got a Raspberry Pi with Raspbian, and all I've done so far is install Apache2 and create a small web site. Now I want to create a database.
Is this possible without using MySQL or other database software? I want to use JS or a text-based database.
I want to be able to save the contact details in a text format.
Can someone point me in the right direction? A simple example would be appreciated; all my online research points to MySQL etc.
All I want is a simple example: enter a name and submit, and have that name be logged, so that if the name is entered again it will say "welcome back". Once I know this mechanism I can add all the other fields. The reason I want this format is so I can see the list that I'm creating.
I just can't get to grips with MySQL. I've spent months trying to understand it, but it's just not going in, so I want to simplify the database to minimal workings so I can complete my site. I know JS isn't so secure, but it's a demo, so security isn't important at this point. Any help appreciated.
It would be possible to use JSON for your data storage. It would be a key-value store: on each page view you load the entire file into memory and parse it, and from then on you can loop over it to search for data or fetch data by key. This requires no extra software, just PHP with Apache.
How to:
Build an array, use json_encode() to create the JSON, and save it using file_put_contents(). Remember to save the whole array and not just the newly added element.
This is not a relational database, but it might do the trick if you build an intelligent system with cookies to store an ID that is associated with a user.
Alternatively, you could use serialize() instead of JSON.
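A minimal sketch of that approach for your "welcome back" example, assuming a data file named contacts.json and a form that POSTs a name field (both names are placeholders):

<?php
// Load the whole store, add or update one entry, then save it all back.
$file  = 'contacts.json';
$store = is_file($file) ? json_decode(file_get_contents($file), true) : [];

$name = trim($_POST['name'] ?? '');
if ($name !== '') {
    echo isset($store[$name]) ? "welcome back, $name" : "hello, $name";
    $store[$name] = ['first_seen' => $store[$name]['first_seen'] ?? date('c')];
    // Remember: write the whole array, not just the new element.
    file_put_contents($file, json_encode($store, JSON_PRETTY_PRINT), LOCK_EX);
}

Because the file is pretty-printed JSON, you can open it in any text editor to see the list you're building, which was one of your requirements.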
If you don't mind using a different way of storing data, you can use Google App Engine, MongoLab, or other cloud-based databases.

Preemptively getting pages with HTML5 offline manifest or just their data

Background
I have a (glorified) CRUD application that I'd like to enable HTML5 offline support with. The cache-manifest system looks simple yet powerful, but I'm curious about how I can allow users to access data while offline.
For example, suppose I have these pages for the entity "Case" (i.e. this is CRM case-management software):
http://myapplication.com/Case
http://myapplication.com/Case/{id}
http://myapplication.com/Case/Create
The first URI contains a paged listing of all cases, using the querystring parameters pageIndex and pageSize, e.g. /Case?pageIndex=2&pageSize=20.
The second URI is the template for editing individual cases, e.g. /Case/1 or /Case/56.
Finally, /Case/Create is the form used to create cases.
The Problem
I would like all three to be available offline.
/Case
The simple way would be to add /Case to the cache manifest; however, that would break paging (as the links wouldn't work).
I think I could instead add something like /Case/AllData, an XML resource that is cached; if offline, a script on /Case would use this XML data to populate the list and provide pagination.
If I go for the latter, how can I have this XML data stored in the in-browser SQL database instead of as a cached resource? I think using the SQL database would be more resilient.
/Case/{id}
This is more complicated. There is the simple solution of manually adding /Case/1, /Case/2, /Case/3, and so on up to /Case/1234, but there can be hundreds or even thousands of cases, so this isn't very practical.
I think the system should provide access to the 30 most recent cases, for example. As above, how can I store this data in the database?
Also, how would this work? If I don't explicitly add /Case/34 to the manifest and the user clicks through to /Case/34, how can I get the browser to load a page that my JavaScript will populate from the browser's SQL database data rather than display the offline message?
/Case/Create
This one is simpler - it's just an empty page, and in the <form>'s submit handler my script would detect whether it's offline; if so, it would add the new case to the browser's SQL database. Does this sound okay?
Thanks!
I think you need to be looking at LocalStorage (though it does have some downsides), but there are other alternatives such as WebSQL and IndexedDB.
Also, I don't think you should use numeric IDs if you are allowing people to create cases offline, as you will get primary-key conflicts; it is probably best to use something like a GUID.
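If your back end happens to be PHP, here is one sketch of generating a version-4 UUID server-side; offline clients would need the equivalent in JavaScript (e.g. with crypto.getRandomValues):

<?php
// Random RFC 4122 version-4 UUID, safe to assign to offline-created cases.
function uuid4(): string {
    $b = random_bytes(16);
    $b[6] = chr((ord($b[6]) & 0x0f) | 0x40); // set version to 4
    $b[8] = chr((ord($b[8]) & 0x3f) | 0x80); // set RFC 4122 variant
    return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($b), 4));
}

echo uuid4(); // e.g. 9b2f64de-35aa-4f4e-8a2d-6f1c03b7e914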
Another thing you need is the ability to push those new cases up to the server; there could be multiple...
Can they be edited? If so, I think you really need to think hard, very hard, about synchronization and conflict resolution.
Shameless self-promotion: I have a project that is designed to handle these very issues. It's not done, but it's close, and you can see an (ugly but very functional) demo at https://github.com/forbesmyester/SyncIt

How to modify and save HTML/CSS server-side?

I'm new to this subject, so this might be a silly question for most of you. I have a simple server that several users will access. If any of them changes a CSS property of an element, the others should be able to see the change in real time.
Should I use something like node.js to perform this? How do I save the changes the users do?
The page would look something like this: http://stom89.dyndns.org/
Thanks!
I guess what you want to change in your CSS/HTML are states, like whether a lamp is on or off. Then you need to save each state in a MySQL DB and just grab the data for each user. If you want it to look like real time for online users, use JS (Ajax) to sync the data regularly.
An alternative way, without a DB, would be files.
If you don't want to use MySQL for this, you can use files. I suggest using ini files. For more on how to read/write ini files, you can visit this question. It's super simple, and you'll be able to have each variable in a nifty array.
What you need: A bit of PHP, a little bit of jQuery (or js), understanding of GET variables
I suggest you create 3 files.
index.php :
Your main page, which is the client. It pulls info using GET
variables. You can use jQuery.get() for this.
getstate.php :
This is the file which will read the ini file and give you back the states for each device. Read them with jQuery.get() from index.php.
savestate.php:
This is the file which you'll send the new states to from index.php Example request: http://address.goes.here/savestate.php?bedroomlight=1&garagelight=0
What's even more interesting is that ini files can be written/read easily by many programming languages, so you can manipulate the data using your Raspberry Pi easily (say someone turns off a light; a script polling the state could change the ini file).
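A rough sketch of getstate.php and savestate.php, assuming the states live in a file called states.ini (a placeholder name); note that PHP can parse ini files natively but has no built-in writer, so savestate.php rebuilds the file by hand:

<?php
// getstate.php - return every device state as JSON for index.php to poll.
$states = parse_ini_file('states.ini') ?: [];
header('Content-Type: application/json');
echo json_encode($states);

<?php
// savestate.php - merge GET parameters into the ini file, e.g.:
//   savestate.php?bedroomlight=1&garagelight=0
$states = parse_ini_file('states.ini') ?: [];
foreach ($_GET as $device => $state) {
    $states[$device] = (int) $state; // coerce everything to 0/1
}

$ini = '';
foreach ($states as $device => $state) {
    $ini .= "$device = $state\n"; // PHP has no ini writer, so build it by hand
}
file_put_contents('states.ini', $ini, LOCK_EX);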
I think you would need to use an SQL database and have JavaScript detect changes and update through Ajax. That's my best idea.
I have been messing with this subject for some time, if I completely understand your question. I would suggest looking at Python, Ruby, or Node.js, though I could not say which is the easiest for you to learn. I would suggest Python and a Comet server (APE, for example), and simply have the server push the updates to the users who are already on the site.
Edit:
Suggestions for polling: jQuery.
http://api.jquery.com/jQuery.get/ for standard data retrieval, which is about all you will need.

How to transfer large data between pages in Perl/CGI?

I have worked with CGI pages a lot and dealt with cookies and storing the data in the /tmp directory in Linux.
Basically I am running a query for millions of records using SQL and saving the result in a hash format. I want to transfer that data to Ajax, which will eventually perform some calculation and return a graph using the Google API.
Or, I want to transfer that data to another CGI page somehow.
PS: The data I am talking about here is on the order of 10-100+ MB.
Until now, I've been saving that data in a file on the server, but again, it's a hassle to deal with that data on the server for each query.
You don't mention why it's a hassle to deal with the data on the server for each query, but assuming the hassle is working with the file, DBM::Deep might make it relatively easy to write the hash out and get it back again. Once you have that, you could create a simple script to return it as JSON and access it as needed from JavaScript or other pages - although I think the browser might slow down with a 100 MB JSON data structure.

Can I run an HTTP GET directly in SQL under MySQL?

I'd love to do this:
UPDATE table SET blobCol = HTTPGET(urlCol) WHERE whatever LIMIT n;
Is there code available to do this? I know this should be possible, as the MySQL docs include an example of adding a function that does a DNS lookup.
MySQL / Windows / preferably without having to compile stuff, but I can.
(If you haven't heard of anything like this but would expect to have if it did exist, a "proly not" would be nice.)
EDIT: I know this would open a whole can o' worms re security; however, in my case, the only access to the DB is via the mysql console app. It is not a world-accessible system. It is not a web back end. It is only a local data-logging system.
No, thank goodness — it would be a security horror. Every SQL injection hole in an application could be leveraged to start spamming connections to attack other sites.
You could, I suppose, write it in C and compile it as a UDF. But I don't think it really gets you anything in comparison to just SELECTing in your application layer and looping over the results doing HTTP GETs and UPDATEing. If we're talking about making HTTP connections, the extra efficiency of doing it in the database layer will be completely dwarfed by the network delays anyway.
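For illustration, a sketch of that application-layer loop in PHP with PDO; the table and column names (myTable, id, urlCol, blobCol) are just hypothetical stand-ins for the ones in the question:

<?php
// Equivalent of the wished-for UPDATE ... SET blobCol = HTTPGET(urlCol):
// select the URLs, fetch each one over HTTP, write the body back.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$rows = $pdo->query('SELECT id, urlCol FROM myTable WHERE blobCol IS NULL LIMIT 100')
            ->fetchAll(PDO::FETCH_ASSOC);

$update = $pdo->prepare('UPDATE myTable SET blobCol = :body WHERE id = :id');

foreach ($rows as $row) {
    // Plain HTTP GET (needs allow_url_fopen); use cURL for timeouts, headers, etc.
    $body = @file_get_contents($row['urlCol']);
    if ($body !== false) {
        $update->execute([':body' => $body, ':id' => $row['id']]);
    }
}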
I don't know of any function like that as part of MySQL.
Are you just trying to retrieve HTML data from many URLs?
An alternative solution might be to use Google spreadsheet's importHtml function.
Google Spreadsheets Lets You Import Online Data
Proly not. Best practice in a web environment is to have database servers isolated from the outside, both ways, meaning that the DB server wouldn't be allowed to fetch stuff from the internet.
Proly not.
If you're absolutely determined to get web content from within an SQL environ, there are as far as I know two possibilities:
Write a custom MySQL UDF in C (as bobince mentioned). This could potentially be a huge job, depending on your experience with C, how much security you want, and how complete you want the UDF to be: e.g. just GET requests? What about POST? HEAD? etc.
Use a different database which can do this. If you're happy with SQL you could probably do this with PostgreSQL and one of the snap-in languages such as Python or PHP.
If you're not too fussed about sticking with SQL, you could use something like eXist. You can do this type of thing relatively easily with XQuery, and would benefit from being able to easily modify the results to fit your schema (rather than just lumping them into a blob field) or store the page "as is" as an XHTML doc in the DB.
Then you can run queries very quickly across all documents to, for instance, get all the links or quotes or whatever. You could even apply XSL to such a result with very little extra work. Great if you're storing the pages for reference and want to adapt the results into a personal "intranet"-style app.
Also, since eXist is document-centric, it has lots of great methods for fuzzy-text searching and near-word searching, and has a great full-text index (much better than MySQL's). Perfect if you're after doing some data mining on the content, e.g.: find all documents where a word like "burger" appears within 50 words of "hotdog" and the word isn't in a UL list. Try doing that natively in MySQL!
As an aside, and with no malice intended, I often wonder why eXist is overlooked when people build CMSs. It's a database that can store content in its native format (XML, or its subset, (X)HTML), query it with ease in its native format, and translate it from its native format with a powerful templating language which looks and acts like its native format. Sometimes SQL is just plain wrong for the job!
Sorry. Didn't mean to waffle! :-$