MySQL database on a different server than MySQL server - mysql

I had been using MySQL Server for all my database tasks, but it's always giving me problems when trying to connect. Is it possible to use a different server like JBoss, WildFly or Tomcat to host my SQL database?

If you have problems with the connection, I'd rather advocate fixing those connection problems than jumping ship just because of them: you'd give up a lot of features that a server like MySQL provides under the hood (if only the ability to run an application-independent backup). Note also that JBoss, WildFly and Tomcat are application servers, not database servers, so they can't replace MySQL directly; at most they can host an embedded database alongside your application.
That being said, there's HSQLDB, which you can run embedded. Note though that this won't be good if you have large amounts of data (as it's generally "in memory"), and the file format is not the most robust: it saves its content in a raw text format that is periodically rewritten. If your server crashes or runs out of memory, this can corrupt your database. Full-fledged servers (like MySQL, PostgreSQL and others) typically go to quite some lengths to save you from data loss in these cases.
HSQLDB is basically just a driver that you embed in your web application; you configure its storage location through the JDBC URL that you specify, e.g. jdbc:hsqldb:file:data/mydb for a file-backed database or jdbc:hsqldb:mem:testdb for a purely in-memory one.
Perfect for quick demos, proofs of concept, development and testing. Your mileage may vary for actual production deployments where you can't risk ever losing your data.

Related

Classic ASP + MSAccess extremely slow on IIS7.5

I migrated my classic ASP site from IIS6 to a much more powerful server with Windows Server 2008 R2 and IIS7.5, but it actually runs even slower.
Every simple call to the Access database takes forever. Many requests are dropped because of the session timeout (120 seconds).
Any idea what could cause the problem and how to solve it?
Thank you.
Before blaming Access and moving to SQL Server Express or another database, you need to make sure you know where the slowdown occurs.
From what you are mentioning, it looks like at least some of the queries don't even complete (IIS times out after 120s).
Access databases, especially if they are accessed locally by a handful of concurrent users, are fast.
Moving to another database may or may not solve the problem, but it will probably be a lot more work than fixing the issue with your current Access database.
That said, if your website needs to serve lots of concurrent users (say more than 50 at a time), you may need to look into moving to a full database server like MySQL, SQL Server Express or PostgreSQL.
A few things to make sure you double check:
Corrupted database. Make sure you run Compact and Repair regularly as a maintenance measure (make a backup first).
Incorrect filesystem rights.
Make sure your IIS process has read/write rights to the folder where the database is located, so that it can create the lock file (.ldb or .laccdb, depending on whether you are using the .mdb or the newer .accdb database format).
A related issue is that the IIS process must be able to create temporary files in the temporary folder, for instance %SystemDrive%\Windows\ServiceProfiles\NetworkService\AppData\Local\Temp.
Bad queries. Open the database with Access and run the queries to check how long they really take and whether they return any errors; you can also time them from outside Access, as in the sketch after this list.
If there are data integrity issues, a query may return unexpected results that have strange side effects on the code in your ASP page.
Check your IIS logs for errors. Also check the OS Event Log.
Make sure there are no other errors that could be causing the behaviour.
Make sure you profile your ASP code to find out exactly which queries and parts of your code are slow and which are fine.
Once you have solved your issues, improve performance by keeping the database connection open, to avoid the lock file being created/deleted all the time (this can have a huge impact on performance).
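If you want to time the queries independently of Access and the ASP page, a minimal sketch along these lines can help isolate the slow ones (it assumes a Windows machine with the Access ODBC driver and the pyodbc package installed; the database path and query are hypothetical):

    import time
    import pyodbc

    # Hypothetical path and query; point these at your own database.
    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=C:\inetpub\data\site.accdb")

    start = time.perf_counter()
    rows = conn.cursor().execute("SELECT COUNT(*) FROM Orders").fetchall()
    print("query took %.3f s -> %s" % (time.perf_counter() - start, rows))
    conn.close()

If the same query is fast here but slow from ASP, the problem is in IIS or the ASP layer rather than in the database itself.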
A good reference with more detailed information on some of the topics above: Using Classic ASP with Microsoft Access Databases on IIS

Looking to make MySQL DB records readable as CSV-like lines in a text file via a standard file I/O Linux device driver (working with legacy code)

I want to be able to read from a real, live, proper MySQL database using standard file access routines. I don't mean reading the MySQL database's own underlying private files; what I mean is implementing a file-based Linux device driver that "presents" a MySQL database as a file. In other words, the text file is a "view" of the MySQL database. The MySQL records are presented in our homegrown custom variation of the CSV format that the legacy code was originally written to understand.
Background
I have some legacy code that reads from a text file containing a very large table of data, each line being a separate record. New records (lines) need to be added, but there is contention for the file among the team; there is also overhead in deploying the legacy code and this file to many systems when releasing the software. The text file itself also needs to be version-controlled.
Rather than modify the legacy code to read these records from a MySQL database directly, I thought it would be better to leave it untouched. This avoids the risks of modifying the code and eases deployment; moreover, modifying the code would add a lot of overhead in de-risking, design discussions, extra testing and so on.
So what I'm looking to do is write a file-based device driver that makes the MySQL database appear as a file to the legacy code, with the data in the format the legacy code expects. That way the legacy code is unchanged and can work oblivious to the fact that the "file" is really a database. Contention is removed, because individual records in the database can now be updated or added separately (via MySQL, or better still a separate web admin interface that guides and validates data entry for individual records), and deployment effort is much reduced because the whole file no longer has to be re-issued on every system that uses it.
The device driver would internally translate standard file read operations into MySQL queries, and translate the results back into the text format expected by the read operation.
This is for a Linux/Unix platform.
Has this been done and what are your thoughts?
This kind of thing has been done before; an obvious example is the dynamic-view filesystem in ClearCase, which provided (and maybe still does) a virtualised view onto a version-control repository. Behind the scenes it implemented an object cache and used RPC to fetch objects from other hosts when necessary, making extensive use of both local and remote databases.
It's fairly clear that you are going to implement the bulk of your filesystem in user space, but you will need a (small) kernel-resident portion. Unless there's a really good reason to do otherwise, FUSE is what you're looking for: it provides the kernel-resident part for you. All you'll need to write is the glue that turns file operations into SQL requests.
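As a rough illustration of that glue, here is a minimal read-only sketch using the fusepy and mysql-connector-python packages (the table, columns, credentials and mount point are all hypothetical, and there is no CSV quoting or snapshotting; a real implementation would need both):

    import errno
    import stat
    import mysql.connector
    from fuse import FUSE, FuseOSError, Operations

    class MySQLCSVFS(Operations):
        # Exposes one virtual file, /records.csv, rendered from a SELECT.

        def __init__(self):
            self.conn = mysql.connector.connect(
                host="localhost", user="reader", password="secret",
                database="legacy")

        def _render(self):
            # Re-run the query on every access so readers see current rows.
            cur = self.conn.cursor()
            cur.execute("SELECT id, name, value FROM records ORDER BY id")
            lines = ["%s,%s,%s" % row for row in cur.fetchall()]
            cur.close()
            return ("\n".join(lines) + "\n").encode("utf-8")

        def getattr(self, path, fh=None):
            if path == "/":
                return {"st_mode": stat.S_IFDIR | 0o555, "st_nlink": 2}
            if path == "/records.csv":
                return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                        "st_size": len(self._render())}
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", "..", "records.csv"]

        def read(self, path, size, offset, fh):
            # Standard read(2) calls from the legacy code end up here.
            return self._render()[offset:offset + size]

    if __name__ == "__main__":
        FUSE(MySQLCSVFS(), "/mnt/legacy", foreground=True, ro=True)

The legacy code can then open /mnt/legacy/records.csv with its existing file I/O routines, while adds and updates go through MySQL.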

MySQL with Node.js: does it make sense to have Node.js save/load stuff to/from the database all the time?

So I have a small game in Node.js (only the server, of course) which has map data and player accounts stored in a MySQL database. Right now I have constructed it in a way that minimizes the number of queries: I load data from the database, keep it in JavaScript objects/arrays or whatever seems appropriate, and only write to the database when needed.
Now I was wondering: is this really worth it? In many cases it would be a lot better (the data would be safer and much more up to date) to store hardly any data in the server and just load it from the database when needed (and write it back when it changes).
My question is: is it efficient/safe/recommendable to have the server read from and write to the database often, rather than keeping data from the database in JavaScript variables on the server?
Additional info:
- The Node.js server and my MySQL server are on the same machine, and a query usually takes less than 1 ms, or maybe 3 ms for big queries like loading room data.
- I am using a module simply called mysql.
- If needed I will include extra info; just ask in a comment.
It really depends on your use case. Generally speaking, I would not add another layer of caching in Node.js, but would handle that in your database with a bigger cache and optimized queries.
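To make the "query when needed" approach concrete, here is the pattern as a rough sketch. It's shown in Python with the mysql-connector-python package purely for illustration (the node mysql module's query() calls are analogous), and the table, columns and credentials are all hypothetical:

    import mysql.connector

    # One long-lived connection; state lives in the database, not in
    # process memory (a real server would use a connection pool).
    conn = mysql.connector.connect(
        host="localhost", user="game", password="secret", database="game")

    def load_player(player_id):
        # Read on demand instead of caching the row in a program object.
        cur = conn.cursor(dictionary=True)
        cur.execute("SELECT x, y, hp FROM players WHERE id = %s",
                    (player_id,))
        row = cur.fetchone()
        cur.close()
        return row

    def save_player(player_id, x, y, hp):
        # Write through immediately, so the database is always current.
        cur = conn.cursor()
        cur.execute("UPDATE players SET x = %s, y = %s, hp = %s WHERE id = %s",
                    (x, y, hp, player_id))
        conn.commit()
        cur.close()

With sub-millisecond local queries this is usually fast enough for a small game, and a crash loses at most the write in flight.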

How does a server farm handle a database?

I have been doing some research into servers for a website I want to launch. I have in mind a server with RAID 10, backed up to a NAS that is also configured with RAID 10. This should keep the data safe in 99.99+% of cases.
My problem appeared when I thought about the need for a second server. If I ever require more processing power, and thus more storage for users, how can I connect a second server to my primary one and make them act as one as far as the database (MySQL) is concerned?
I mean, I don't want to replicate my first DB on the second server and load-balance the requests; I want to use just one DB (maybe external) and let both servers use it at the same time. Is this possible? And is backing up MySQL data to a NAS viable?
The most common configuration (once you scale beyond a single box) is to put the database on its own server. In many web applications the database, rather than the web server, is the bottleneck, so this tends to be the first hardware scale-up step.
This also allows you to put additional security between the database and web server - firewalls are common; different user accounts etc. are pretty much standard.
You can then add web servers to the load balancer, all talking to the same database, as long as your database can keep up.
Having more than one web server also helps with resilience - you can have a catastrophic hardware event on one webserver and the load balancer will direct the traffic to the remaining machines.
Scaling the database server performance is a whole different story - though typically you use very beefy machines for the database, and relative lightweights for the web servers.
To add resilience to the database layer, you can introduce clustering - this is a fairly complex thing to keep running, but protects you against catastrophic failure of a single machine.
Yes, you can back up MySQL to a NAS.
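For instance, a minimal sketch that dumps everything to a NAS-mounted path (the mount point and backup account are hypothetical, and mysqldump must be on the PATH):

    import subprocess
    from datetime import date

    # Write the dump straight onto the NAS mount.
    target = "/mnt/nas/backups/all-%s.sql" % date.today().isoformat()
    with open(target, "w") as out:
        subprocess.run(
            ["mysqldump", "--single-transaction", "--all-databases",
             "--user=backup", "--password=secret"],
            stdout=out, check=True)

The --single-transaction flag gives a consistent snapshot of InnoDB tables without locking the whole server while the dump runs.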

Which should I use for working with a large amount of data, MySQL or SQLite?

I need to work with a fairly large amount of data, and am considering both MySQL and SQLite. So I'm trying to get a good, high-level overview of both packages:
How well do each handle large databases?
Is SQLite as much of a handful to work with as MySQL?
Are there any good (web-based) resources comparing these two?
SQLite is a database library: it runs only inside the program that uses it, and it saves its data in a file on a locally accessible filesystem (possibly mounted from a file server). You cannot connect to it remotely. I originally also claimed it could not be written to by more than one program at a time, although other processes could read from it; forget that part, as it was based on outdated assumptions, and I need to read up on sqlite3 because it can now do things I was not aware of.
MySQL is a database server, i.e. can run on another machine and multiple computers and programs can connect to it at the same time.
Although SQLite can also handle quite large datasets, in most circumstances people will choose MySQL for large datasets, because they want remote access to the data while the program is running (without exposing the database files to well-intentioned but inadvertent "cleanup" actions), whether for administrative purposes or to run reports.
If what you need is an embedded database that will only ever be used by a single application, SQLite will be just fine.
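A minimal illustration of that embedded model, using Python's built-in sqlite3 module (the table is hypothetical): the whole "server" is just a library call, and the database is a single local file.

    import sqlite3

    conn = sqlite3.connect("app.db")  # the file is created on first use
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO items (name) VALUES (?)", ("example",))
    conn.commit()
    print(conn.execute("SELECT id, name FROM items").fetchall())
    conn.close()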
And no, SQLite is not as much of a handful as MySQL. MySQL is not really difficult, but it has a number of strange quirks that hit people when they try to get it installed. Once it is running, it is pretty painless.
You might also look at PostgreSQL, as I find it a bit easier to manage and maintain; some aspects feel more "logical" to me than MySQL. That being said, in practice there is not a huge difference.