I want to import data to my local machine. Is there any way to do it?
Thanks,
Michael
The cbtransfer tool is your friend for this one. As an added benefit, you can also give it regular expressions to transfer only a subset of the data if you need to.
I would like to know if there is any kind of tool to move data from one database to another. In my case I'm trying to move data from HSQLDB to MySQL.
I've already tried MySQL Workbench, but it doesn't support HSQLDB.
I think the worst case would be to export the data into SQL files, modify these, and try to import them into MySQL.
Have you tried this?
https://confluence.atlassian.com/conf56/migrating-from-hsqldb-to-mysql-658737210.html
Hope this helps.
There are a few cross-platform tools to move data between different databases. These include Flyway. See this page for a list of useful utilities: http://hsqldb.org/web/hsqlUsing.html
My solution was SQL Workbench/J, which can be used to copy/sync data between databases. You can execute SQL files from the CLI, which makes it easy to run remotely.
To copy/move/sync/whatever your data, just use WbCopy.
Example:
WbCopy -sourceConnection='username=SA,url=jdbc:hsqldb:/db/myDb'
-targetConnection='username=root,url=jdbc:mysql://someIp:3306/myDB'
-targetTable=SOMETABLE -sourceTable=SOMETABLE -ignoreIdentityColumns=false
-mode=insert,update -keyColumns=ID -deleteTarget=false -continueOnError=false;
We need to pull data into our data warehouse. One of the data sources is internal.
We have two options:
1. Ask the data source team to expose the data through an API.
2. Ask the data source team to dump the data on a daily basis, and grant us a read-only DB credential to access the dump.
Could anyone please give some suggestions?
Thanks a lot!
A lot of this really depends on the size and nature of the data, what kind of tools you are using, how familiar the data source team is with building APIs, and so on.
I think we need a lot more information to make an informed recommendation here. I'd really recommend you have a conversation with your DBAs, see what options they have available to you, and seriously consider their recommendations. They probably have far more insight than we will concerning what would be most effective for your problem.
Cons of the API solution:
Cost. Your data source team will have to build the API. Then you will have to build the client application that reads the data from the API and inserts it into the DB. You will also have to host the API somewhere and design the deployment process. It's quite a lot of work, and I don't think it's worth it.
Performance. Not necessarily, but data warehouses usually mean dealing with a lot of data. With an API you will most likely have to transform the data before you can use the bulk-insert features of your database.
The daily DB dump solution looks much better to me, but I would change it slightly if I were you: use a flat file. Most databases can bulk-insert data from a file, and that usually turns out to be the fastest way to accomplish the task.
So, from what I can tell from your question, I think you should do the following:
1. Agree with your data source team on the data file format. This way you can work independently and even use different RDBMSs.
2. Choose a shared location to which both the data source team's DB and your DB have fast access.
3. Ask the data source team to implement the export logic to a file.
4. Implement the import logic from a file.
Please note that items 3 and 4 should take only a few lines of code. As I said, most databases have built-in and OPTIMIZED functionality to export/import data to a file; a sketch of the import side follows.
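For illustration, here is a minimal JDBC sketch of item 4 (the import) on the MySQL side, assuming you agreed on a CSV format. The connection URL, table name, and file path are placeholders, and with recent Connector/J versions you may also need allowLoadLocalInfile=true in the URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WarehouseImport {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/warehouse", "root", "secret");
             Statement st = con.createStatement()) {
            // LOAD DATA uses MySQL's optimized bulk-insert path; the field and
            // line separators must match the file format agreed on in item 1.
            st.execute("LOAD DATA LOCAL INFILE '/shared/dump/orders.csv' "
                     + "INTO TABLE staging_orders "
                     + "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'");
        }
    }
}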
Hope it helps!
Could anyone advise me, or direct me to a site that explains the best way to go about this? I'm sure I could figure it out with a lot of time invested, but I'm just looking for a jump start. I don't want to use the migration tool either, as I just want to put FMP XML files on the server and have it create new MySQL databases based on the FMPXMLRESULT data provided.
Thanks
Technically you can write an XSLT to transform the XML files into SQL. It's pretty much straightforward for data (except data in container fields), and with some effort you can even transfer the schema from DDR reports (though I doubt it's worth it for a single project).
Which version of MySQL? v6 has LOAD XML, which will make things easy for you.
If not v6, then you are dealing with stored procedures, which can be a pain. If you need v5, it might make sense to install MySQL 6, get the data in there using LOAD XML, and then do a mysqldump, which you can import into v5.
Here is a good link:
http://dev.mysql.com/tech-resources/articles/xml-in-mysql5.1-6.0.html
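To make that concrete, here is a minimal JDBC sketch of LOAD XML. It assumes the XML has one element per record in a layout LOAD XML understands (if the raw FMPXMLRESULT output doesn't, transform it first with an XSLT as suggested above); the connection URL, file path, table name, and row tag are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FmpXmlLoad {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/fmp", "root", "secret");
             Statement st = con.createStatement()) {
            // ROWS IDENTIFIED BY names the element that represents one record;
            // column values are matched by attribute or child-tag name.
            st.execute("LOAD XML LOCAL INFILE '/data/export.xml' "
                     + "INTO TABLE contacts ROWS IDENTIFIED BY '<ROW>'");
        }
    }
}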
What is the best way to create an archive of image documents in the database?
Given that we have about 2-10 million records, and each record includes 2-4 images and about 20 text fields, what is the best way to create this archive so that we get good speed and strong security for the data?
Also, which database is good for this project?
Definitely use the file system as Minor suggested.
One option is SQL Server FILESTREAM. See http://msdn.microsoft.com/en-us/library/cc949109.aspx.
Use file system storage for the archived images, and save a link to each image file in the DB. If you serve the content over HTTP, you can use a caching proxy server such as Squid, Nginx, etc.
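A minimal sketch of that layout, assuming MySQL and hypothetical table, column, and path names: the image bytes go to a sharded directory tree on disk, and only the path plus metadata are stored in the database.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ImageArchive {
    public static void main(String[] args) throws Exception {
        byte[] image = Files.readAllBytes(Paths.get("scan.jpg"));
        // Shard the directory tree so no single folder holds millions of files.
        Path target = Paths.get("/archive/00/2a/doc-42-1.jpg");
        Files.createDirectories(target.getParent());
        Files.write(target, image);
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/archive", "root", "secret");
             PreparedStatement ps = con.prepareStatement(
                 // document_image is a hypothetical linking table.
                 "INSERT INTO document_image (document_id, seq, path) VALUES (?, ?, ?)")) {
            ps.setLong(1, 42L);
            ps.setInt(2, 1);
            ps.setString(3, target.toString());
            ps.executeUpdate();
        }
    }
}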
More questions for you:
How dynamic is the data? Do you store it once and never change it, or does it change frequently?
Do you need versioning for the documents, or does the latest version simply overwrite the previous one?
Are the documents always edited using one application, or can they be changed outside it (e.g., using Word)?
Are the documents related to other "non-document" data (database rows), or are they the only thing you need to store?
File system won't offer any real security, so I would discount that straight off.
In Oracle there is built-in image support through the ORDImage type.
Check out Marcel's blog: he and the Piction company do a lot of work in this area, and he has lots of useful material to download.
You can use controlled downloads, e.g. via Nginx's X-Accel-Redirect. Look at http://kovyrin.net/2006/11/01/nginx-x-accel-redirect-php-rails/lang/en/
Are there any ways to import data from databases such as MS SQL or MySQL into in-memory databases like HSQLDB, H2, etc.?
H2 supports a special database URL that initializes the database from a SQL script file:
"jdbc:h2:mem:test;INIT=RUNSCRIPT FROM '~/create.sql'"
HSQLDB and Apache Derby don't support such a feature as far as I know.
I think you need to:
1. Query the data out of MS SQL.
2. Import the data into the in-memory DB with its API, using either SQL statements or DB-related APIs.
In Hibernate: adding import.sql to the classpath works great; hbm2ddl checks if the file exists and executes it. The only detail is that every SQL command must be on one line, otherwise it will fail to execute.
You could dump the data as SQL INSERT statements, then read them back.
You could read into a temporary object (like a struct), then write it back to the in-memory DB.
Look at the free "universal database converter" http://eva-3-universal-database-converter-udc.optadat-com.qarchive.org/ -- it does claim to support MySQL, MS-SQL, and HSQLDB, among others.
It really depends on what kind of approach you have in mind.
Is there a tool that could do it automatically, without programming? Maybe.
Do you want to develop it yourself? Then find out whether your favorite language supports both database engines (the standard one and the in-memory one), and if it does, just write a script that does it.
Process everything in chunks (fetch n rows at a time, then insert them; repeat), as sketched below. How big should the chunks be? It's up to you: try different sizes (say 100, 500, 1k, etc.), see which one performs better on your hardware, and fine-tune to the sweet spot.
If, on the other hand, your favorite language doesn't support both of them, try using something that does.
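Here is a minimal JDBC sketch of that chunked copy, assuming the target table already exists in the in-memory database; the connection URLs, table, and column names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChunkedCopy {
    private static final int CHUNK_SIZE = 500; // tune per the advice above

    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection(
                 "jdbc:sqlserver://host;databaseName=prod", "user", "pass");
             Connection dst = DriverManager.getConnection("jdbc:h2:mem:copy", "sa", "")) {
            dst.setAutoCommit(false);
            try (Statement read = src.createStatement();
                 ResultSet rs = read.executeQuery("SELECT id, name FROM users");
                 PreparedStatement write = dst.prepareStatement(
                     "INSERT INTO users (id, name) VALUES (?, ?)")) {
                int pending = 0;
                while (rs.next()) {
                    write.setLong(1, rs.getLong(1));
                    write.setString(2, rs.getString(2));
                    write.addBatch();
                    if (++pending == CHUNK_SIZE) {
                        write.executeBatch(); // flush one chunk
                        dst.commit();
                        pending = 0;
                    }
                }
                if (pending > 0) { // flush the final partial chunk
                    write.executeBatch();
                    dst.commit();
                }
            }
        }
    }
}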
You can use dbunit for dumping the database to XML files and importing it back into another RDBMS.
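For example, a minimal dbunit round trip might look like the following sketch, assuming the target schema already exists; the connection details and file name are placeholders:

import java.io.File;
import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class DbUnitRoundTrip {
    public static void main(String[] args) throws Exception {
        // Dump every table of the source database to a flat XML file.
        Connection src = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/app", "root", "secret");
        IDatabaseConnection srcDb = new DatabaseConnection(src);
        FlatXmlDataSet.write(srcDb.createDataSet(), new FileOutputStream("dump.xml"));

        // Replay the dump into the in-memory target.
        Connection dst = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
        IDatabaseConnection dstDb = new DatabaseConnection(dst);
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dump.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(dstDb, dataSet);
    }
}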
Recent versions of HSQLDB allow you to open a CSV (comma-separated values) or other delimiter-separated data file as a TEXT TABLE, even with mem: databases; the data can then be copied to other tables.
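A minimal sketch of that feature, with a hypothetical CSV file and column layout (as far as I recall, the textdb.allow_full_path system property is needed before a mem: database may read a source file by absolute path):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvIntoHsqldb {
    public static void main(String[] args) throws Exception {
        // Allow the all-in-memory database to read a text source by full path.
        System.setProperty("textdb.allow_full_path", "true");
        try (Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:mem:imports", "SA", "");
             Statement st = con.createStatement()) {
            st.execute("CREATE TEXT TABLE staging (id INT, name VARCHAR(100))");
            // Attach the CSV file; 'fs=,' sets the field separator.
            st.execute("SET TABLE staging SOURCE '/data/users.csv;fs=,'");
            // Copy the rows into a regular in-memory table.
            st.execute("CREATE MEMORY TABLE users AS (SELECT * FROM staging) WITH DATA");
        }
    }
}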
As others have pointed out, there are also capable and well maintained third party tools for this purpose.