I am new to database programming and there are some general questions I would like to ask. I created the schema on my localhost using MySQL and connected it to Eclipse. There are some problems that I do not know how to approach.
One of my friends would like to help develop on his personal machine, but he cannot connect to my database server. One option is to copy the schema to his MySQL installation and change the connection string; are there any better ways?
If I release the project and run it on different machines, will the database operations be affected, since the schema resides on my local server?
Is there a way to embed the database inside the project itself, since it is a local database and I am not accessing it from any other programs?
Sorry if my questions sound very stupid. I am really new.
Your approach of running a local test MySQL instance for every developer sounds fine.
However, if your application never needs to share any data (the database is always local, as you stated), then you should ask yourself whether MySQL is really an appropriate solution for your application. I think SQLite is a better choice in that situation: it is likely to be faster than MySQL for local access, and it has essentially zero setup, with no database daemon to run or anything like that, which greatly simplifies application install. (A minimal sketch of what that looks like follows the list below.)
Granted, SQLite has its limits, but I often start my projects on SQLite and migrate them to MySQL or PostgreSQL only if they grow large enough to require it.
Typical signs that SQLite may not cut it:
Many clients (10+) need write access to the database
The total database size is very large, say more than 10 GB
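To illustrate the zero-setup point, here is a minimal sketch in Python (the file name and table are made up for illustration):
import sqlite3

# The database is just a file next to the application; no server daemon needed.
# "app.db" and the users table below are illustrative placeholders.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.commit()
for row in conn.execute("SELECT id, name FROM users"):
    print(row)
conn.close()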
I am working at a company that has some CRM software running on a remote Windows XP server; it uses a SQL Anywhere 9 database to store its data. I have access to this remote server with an administrator account.
I would like to extract the database into a .sql file so that I can run it locally on my machine without affecting the running database on the server (since it is key to the company's day-to-day operation).
The reason I need this is that we are going to test some BI Software and we need data from this database to test it, but we don't know the structure of the database since the developers of the CRM software didn't give us any documentation on it. So we need to have the database locally so that, without affecting the running CRM, we can:
understand the structure by looking at the DDL
make queries to it to get sample data
I researched a bit, and the most common solution to my problem was to use dbunload on the remote server to unload the database into a reload.sql file containing what I need. But most tutorials on the subject mention that I have to stop the database first (which would be catastrophic). If this is the only option, then I guess I am willing to do it over the weekend when the CRM is not in use, but I wanted to know first whether there is another solution.
If there is no other solution, can you point me to the proper and safest way to do this?
I have researched a lot, but before today I had never even heard of SQL Anywhere, so I really need all the help I can get. My main concern is doing something that negatively impacts the CRM software.
Thank you.
You can run dbunload across the network; you just have to tell it to do an "external" unload. The default is an internal unload, which only works from the machine where the database server is running.
I don't have the SQL Anywhere 9 documentation at hand to look up the exact switch, but dbunload -? should show you all the possible switches.
Edit:
-an will create a new database and load the schema and data from another database
-xi will do an external unload and internal reload
-c specifies the connection parameters for your remote database
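Putting those together, the invocation would look something like this (the connection string and file name are purely illustrative; substitute your own server name, host, and credentials):
dbunload -c "ENG=crmserver;DBN=crmdb;UID=dba;PWD=sql;LINKS=tcpip(HOST=remotehost)" -an c:\local\crm_copy.db -xi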
I need to migrate data from SQL Server 2000 to MySQL. Currently I am using MySQL Workbench. Can anyone tell me how to do this?
If you are looking for a tool, MySQL has one called "MySQL Migration Toolkit" that can be used to migrate the data from SQL Server to MySQL. But the first thing, as mentioned above, is to take a backup. The next thing to check is whether there are any datatypes that cannot be converted.
Exactly what have you tried? You can quickly migrate data from MSSQL to MySQL if it is in any of the following data file formats (a sketch of the ODBC route follows the list):
Paradox (.db)
DBase (.dbf)
Delimited Text (.txt)
Excel (.xls)
XML (.xml)
MS Access Database (.mdb)
ODBC
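For the ODBC route, a minimal Python sketch, assuming pyodbc and pre-configured ODBC DSNs for both servers (the DSNs, table, and column names are made up):
import pyodbc

# Hypothetical DSNs; set them up in the ODBC Data Source Administrator first.
src = pyodbc.connect("DSN=mssql_source;UID=user;PWD=passwd")
dst = pyodbc.connect("DSN=mysql_target;UID=user;PWD=passwd")

# Copy one table's rows across; repeat per table as needed.
rows = src.cursor().execute("SELECT id, name FROM customers").fetchall()
cur = dst.cursor()
cur.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", rows)
dst.commit()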
If it's a one-to-one migration, with exactly the same database architecture on both the new and the old servers, you might want to try database tools meant to make this an easier process for GUI-based administrators. Just go to download.com and find some software that may assist you in that migration; Navicat is a good one. The most important thing is to always BACK IT UP! BACK IT UP! BACK IT UP! Never do anything without mirroring drives and doing whatever it takes to ensure that if you do destroy any data, you'll have backup copies of it. Also, how fast your machines are will be a sizable factor when it comes to migrating very large databases.
All in all you have many options to choose from, yet the most important thing is to back it up! I can't stress that enough; it might seem like meaningless extra work, especially on humongous database systems, but trust me, it's better to be safe than sorry. Also, I always like rebooting machines prior to making database changes to them, cutting off any connection to the outside world or any processes depending on or updating their data. Turning off ODBC will do much of that for you on Windows as well, but as always, better safe than sorry. Many a corruption can be avoided by simply rebooting the machine, so that all data in memory is flushed to the active database before you back it up or append to it.
Check out etlalchemy. It is a free, open-source Python tool capable of migrating between any of the following SQL databases: PostgreSQL, MySQL, Oracle, SQL Server, and SQLite.
To install: pip install etlalchemy
To run:
from etlalchemy import ETLAlchemySource, ETLAlchemyTarget

# Migrate from SQL Server onto MySQL
src = ETLAlchemySource("mssql+pyodbc://user:passwd@DSN_NAME")
tgt = ETLAlchemyTarget("mysql://user:passwd@hostname/dbname",
                       drop_database=True)
tgt.addSource(src)
tgt.migrate()
Our Java server application logs data to a SQL database, which may or may not be on the same machine. Currently we use MS SQL Server, and we're now porting to MySQL. A user configures database backup parameters on our app server, e.g. time of day to run a backup, and the app server executes SQL Server's BACKUP DATABASE command at the appropriate time, via a sproc. It does incremental backups daily and full backups weekly.
MySQL lacks an equivalent feature for telling the database, from a client connection, to back itself up. The options we're considering are:
Create a UDF to shell out to mysqldump (or copy database files), which can be called from our app server via a sproc. Essentially we'd be implementing a version of BACKUP DATABASE for MySQL.
Create a service to run on the MySQL box that can get the backup settings from the app server and run mysqldump (or a file copy) locally; a rough sketch of such a service appears after this list.
Create a backup sproc to mimic mysqldump, e.g. SHOW CREATE TABLE and SELECT ... INTO OUTFILE for each table.
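As a sketch of the second option, assuming Python is available on the MySQL box (the paths, credentials, and the hard-coded call at the bottom are placeholders; a real service would receive its settings from the app server):
import datetime
import subprocess

# Placeholder locations; the real service would get these from the app server.
MYSQLDUMP = r"C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqldump.exe"
OUT_DIR = r"C:\backups"

def run_full_backup(user, password, database):
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    out_file = r"%s\%s_%s.sql" % (OUT_DIR, database, stamp)
    with open(out_file, "w") as f:
        # --single-transaction gives a consistent dump of InnoDB tables
        # without locking the application out for the duration.
        subprocess.check_call(
            [MYSQLDUMP, "-u", user, "--password=" + password,
             "--single-transaction", database],
            stdout=f)
    return out_file

run_full_backup("backup_user", "secret", "appdb")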
Setting up a cron job, Perl script, third-party app or other tricks that'd work great in a data center aren't preferred; this is a shrink-wrap package that needs to be pretty robust and hands off.
Database sizes can range from roughly 10MB to 10GB.
I'm aware of the binary logs for the incremental piece. I figure the general solution will probably apply to them as well, if we decide to use them.
This is all on Windows 2003 32-bit or 2008R2 64-bit, MySQL 5.1.
The UDF option seems the best to me. The UDF Repository (http://www.mysqludf.org/) has mysqludf_sys, which may be all we need, but I thought I'd ask for opinions since after extensive googling it doesn't seem like others have reached the same conclusion, or maybe our needs are just out of the ordinary. Our app is the only thing in MySQL, so I'm not worried about other users having access to our UDF.
Any solutions I'm overlooking? Any experience with using UDFs in such a way?
Thanks,
Eric
For this and other reasons we decided to colocate our application with the database, so this problem became moot.
I'm developing an app which will have a central database that users can add entries to. The database will have to be on a server somewhere, but I want the users to be able to add entries offline; the app will sync to the main DB when a connection is available. So, I suppose I need two databases: the main one sitting on a server (preferably Linux), and a small one on each client machine to use as a buffer when offline. The app will be coded in C# for Windows. I'm having trouble deciding which databases to use and whether I can leverage any replication technology to make this easier. Also, I don't want to pay for anything ;) So I guess my questions are...
Will I have any trouble writing code in ADO.NET to move data from something like SQL Compact Edition to MySQL?
Are there any replication solutions which will move stuff from the local to the main database for me?
I've recently discovered IBM's DB2 Express-C, but I'm not sure whether it can run serverless as well as server-installed. Does anyone know?
Firebird can be server or serverless. Can I replicate between them? Is the server mode capable of heavy use?
Firebird can be server or serverless. Can I replicate between them?
Yes.
Is the server mode capable of heavy use?
Define 'heavy use'. I've had production systems with 200 simultaneous users pumping 20 transactions/minute each on databases in the 10-20GB range. I'm sure there are many larger deployments out there.
Also, what you describe seems like the 'briefcase model'. You should look into it if you haven't already done so. Maybe the solution is not replication at the database level, but rather a smarter fat client.
Just answering two of your questions; I don't know about DB2 or Firebird.
Will I have any trouble writing code in ADO.NET to move data from something like SQL Compact Edition to MySQL?
That should be very trivial; install MySQL Connector/NET and you're good to go.
Are there any replication solutions which will move stuff from local to main database for me
SQL Server replication is made for this, but I don't suppose it would work with MySQL.
I currently have an MS Access application that connects to a PostgreSQL database via ODBC. This successfully runs on a LAN with 20 users (each running their own version of Access). Now I am thinking through some disaster recovery scenarios, and it seems that a quick and easy method of protecting the data is to use log shipping to create a warm-standby.
This led me to think about putting this warm standby at a remote location, which then raises the question:
Is Access connecting to a remote database via ODBC usable?
I.e., the remote database might be in the same country, with OK ping times, and I have a 1 Mbit SDSL line.
onnodb,
The PostgreSQL ODBC driver is actively developed, and an Access front end combined with a PostgreSQL server makes, in my opinion, a great option for rapid development on a LAN. I have been involved in a reasonably big system (100+ PostgreSQL tables, 200+ Access forms, 1000+ Access queries & reports) and it has run excellently for a few years, with ~20 users. Any queries that run slowly because Access is doing something stupid can generally be fixed just by using views, and any really data-intensive code can easily be moved into PostgreSQL functions and then called from Access.
The main ODBC-related issue we have is that there is no way to kill a slow-running query from Access, so we often get users just killing Access, leaving massive queries executing on the server.
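One possible server-side cleanup for those orphaned queries, sketched in Python with psycopg2 (the connection details are placeholders, and the pg_stat_activity column names vary across PostgreSQL versions, so treat this as illustrative):
import psycopg2

# Connect as a superuser; the connection parameters are placeholders.
conn = psycopg2.connect("dbname=crm user=postgres host=dbserver")
conn.autocommit = True
cur = conn.cursor()
# Ask the server to cancel queries that have run for more than 10 minutes.
cur.execute("""
    SELECT pg_cancel_backend(pid)
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '10 minutes'
""")
print(cur.fetchall())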
Yes.
I don't have any experience using Access to hit PostgreSQL from a remote location, but I have successfully used Access as a front end to SQL Server & DB2 from a remote location.
Ironically, what you don't want to do is use Access to front-end an Access database (.mdb) from a remote location over a high-latency link. Since hitting the MDB uses file-based operations, it's pretty easy to end up with a corrupt database if you have anything more than a trivial one.
It depends a lot on the database you're using as a back end. I've had rather terrible experiences with MySQL as a back end. Make sure the ODBC driver you're using is actively developed, stable, and complete; this was definitely not the case for MySQL. You may also want to check for any compatibility issues between Access and PostgreSQL. And, of course, it won't hurt to test extensively.
Oh, and I think it'd be absolutely great if you could post back here later with your experiences!
PostgreSQL works great as a backend for MS Access; there are a couple of support functions you should use to make things easier. See here for more info:
http://www.amsoftwaredesign.com/smf/index.php?board=8.0