SQL Database Migration to Salesforce/Another CRM (One Time Import) - mysql

Currently we have a ~2GB CRM database built on MySQL + ColdFusion, running on our local MS2012 server. We're looking to move it to a more usable, up-to-date solution that would give us flexibility, security, and back-up options. We're also no longer going to be running on ColdFusion.
I received the full database backup as a .bak file and have restored it successfully in Microsoft SQL Server Management Studio, so I can see the massive list of tables, views, programmability, service broker, storage, and security items.
Salesforce seems like a good bet: if I ever leave, we could likely hire someone who could pick it up and keep working with it, and Salesforce also makes sense for what we're trying to do with the CRM.
I'm unsure how to approach this migration. Right now I'm working on a backup copy to practice and to put a process in place for a smooth transition, because the company is still doing its day-to-day work on CF until we have a set-in-stone stop date for the transfer. It will be a one-time transfer, so I don't need to establish a constant connection; I just want to pull in all the database tables, values, relationships, etc., and then get everyone set up. I realize pulling in users with their login information might not be feasible and that I'd have to create users in Salesforce, but I do want the data each user has entered to be retained.
There might be additional details you need to fully answer the question, so please let me know if there are crucial gaps that would help you get to the proper answer.

DBAmp is pretty much industry standard at this point. https://appexchange.salesforce.com/listingDetail?listingId=a0N300000016bWzEAI. It will allow you to do CRUD from SQL.
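For context on what "CRUD from SQL" means here: DBAmp exposes your Salesforce org as a SQL Server linked server, so once it is configured you can read and write Salesforce objects with ordinary T-SQL. A rough sketch, assuming the linked server is named SALESFORCE and using a made-up local staging table:

    -- Query Salesforce Account records through the DBAmp linked server
    -- (four-part naming: linked server name, then the Salesforce object).
    SELECT Id, Name, Industry
    FROM SALESFORCE...Account
    WHERE Industry = 'Technology';

    -- Push rows from a local staging table up into Salesforce. For large
    -- volumes DBAmp provides bulk-load stored procedures instead, but the
    -- idea is the same: you stay inside SQL Server the whole time.
    INSERT INTO SALESFORCE...Contact (FirstName, LastName, Email)
    SELECT FirstName, LastName, Email
    FROM dbo.Contact_Staging;   -- hypothetical local table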
Also, when you sign an agreement with Salesforce, they will also hook you up with an integration partner to whom you will pay large sums of money to help you with the transition.
Edit: Sorry, I guess I didn't really answer your full question. Yes, you are missing large, LARGE pieces of information. DBAmp will help you with your data, but don't think you will just be able to import your data structure over.

Related

How to update mysql tables between computers

I'm working on a group project where we all have a mysql database working on a local machine. The table mainly has filenames and stats used for image processing. We all will run some processing, which updates the database locally with results.
I want to know what the best way is to update everyone else's database, once someone has changed theirs.
My idea is to perform a mysqldump after each processing run, and let that file be tracked by git (which we use religiously). I've written a bunch of Python utils for the database, and it would be simple enough to read this dump into the database when we detect that the DB is behind. I don't really want to do this, though, lest it clog up our git repo with unnecessary 10-50 MB files with every commit.
Does anyone know a better way to do this?
*I'll also note that we are Aerospace students. I have some DB experience, but it only comes out of need. We're busy, and I'm not looking to become an IT/networking guru. I just want to keep it hands-off for the others, since they are DB noobs and get that glazed-over look of fear whenever I tell them to do anything with the database. I've kept it hands-off for them thus far.
You might want to consider following the Rails-style database migration concept, whereby as you are developing you provide roll-forward and roll-back SQL statements that work as patches, allowing you to roll your database to any particular revision state that is required.
Of course, this is typically meant for dealing with schema changes only (i.e., you don't worry about revisioning data that might be dynamically populated into tables). For configuration tables or similar tables that are basically static in content, you can certainly add migrations as well.
A Google search for "rails migrations for python" turned up a number of results, including the following tool:
http://pypi.python.org/pypi/simple-db-migrate
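To make the migration idea concrete: each migration is just a numbered pair of SQL scripts, one that rolls the schema forward and one that undoes it, and a tool like simple-db-migrate essentially keeps track of which numbered scripts have already been applied to a given database and runs the missing ones in order. A minimal sketch with made-up table and column names:

    -- 001_add_processing_stats (roll-forward)
    ALTER TABLE image_stats
        ADD COLUMN blur_score   DOUBLE NULL,
        ADD COLUMN processed_at DATETIME NULL;

    -- 001_add_processing_stats (roll-back)
    ALTER TABLE image_stats
        DROP COLUMN blur_score,
        DROP COLUMN processed_at;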
I would suggest creating a DEV MySQL server on any shared hosting (no DB experience is required).
Allow remote access to this server (again, no experience required; everything can be done through the control panel).
Then you and your group of developers will have access to the database at any time, from any place and any device (as long as you have an internet connection).
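If the host's control panel doesn't set up the remote account for you, it is only a couple of statements (the database and user names here are placeholders):

    -- Create one shared account that can connect from any host and can
    -- only touch the project's database.
    CREATE USER 'groupdev'@'%' IDENTIFIED BY 'choose-a-strong-password';
    GRANT ALL PRIVILEGES ON projectdb.* TO 'groupdev'@'%';
    FLUSH PRIVILEGES;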

Building up an online administration service, what database strategy should I go for

I'm building up an online (paid) service used for business administration purposes. The database is structured like so:
I have a contacts table filled with persons, contact info and the like. Then I have a few other tables holding information about payments, agreements and appointments. Also statistics like how much money was transferred this month, how many hours worth of appointments this month and the like.
I'm using MySQL (but could also go for MSSQL or some other service if necessary) and I had no formal training in any programming language whatsoever (yet).
I'm building a WPF application for access to this database. I'm also planning on building an app so users can access their data and plan new appointments and register payments on the go.
I'm going to go for a login system to verify their right to login and use my service.
My question is about how to structure this. I'm not an SQL expert nor have I had any formal training in SQL or any other programming language. What I do know though is that my client-side app is almost out of the alpha stage.
So far I have come up with two ways to structure this.
1. Users get a separate database.
My original idea was to give each user a separate database; this makes it easier to provide people with statistics. It also makes it easier to spread the workload across multiple, separate servers. People would log in to a master/main server, where their login information is stored, fetch their server info, and programmatically be 'redirected' to their own database. Spreading these databases also makes it easier to provide individual back-ups to users.
The downside of this is the sheer number of databases I'd have to manage. I'm planning on ending up with hundreds of thousands of users; let's just say I want the system to be able to serve a practically unlimited number of users.
2. Everything is stored in one database.
It's also possible to store everything in one database. This would make the database structure somewhat more complicated (while making the whole setup a lot simpler). I'd have to add 'AND consumer_ID=' + MyID to every query (which is of course possible) and add a few tables to handle statistics per user; a rough sketch of what I mean is below.
It would be simpler to provide every user with the same database updates. Maintenance would be easier.
The downside of this is that it makes it harder to spread the workload across separate servers; I'd have to build something so that separate servers mirror each other. I'd also have to make sure the workload is divided between the servers automatically, instead of simply going for: fill a server with X databases, then start a new server, fill it, and so on.
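To illustrate what I mean by option 2, this is roughly how I picture one of the shared tables and a query against it (simplified, made-up columns):

    -- One shared table; every row is tagged with the consumer that owns it.
    CREATE TABLE appointments (
        appointment_id INT AUTO_INCREMENT PRIMARY KEY,
        consumer_id    INT NOT NULL,          -- which customer owns this row
        contact_id     INT NOT NULL,
        starts_at      DATETIME NOT NULL,
        hours          DECIMAL(5,2) NOT NULL,
        INDEX idx_consumer (consumer_id)
    );

    -- Every query is scoped to the logged-in consumer. Binding the ID as a
    -- parameter instead of concatenating it into the query string also
    -- avoids SQL injection.
    SELECT contact_id, starts_at, hours
    FROM appointments
    WHERE consumer_id = ?
    ORDER BY starts_at;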
I don't have the luxury of hiring someone with SQL training.
The most important thing for me now is that the system can be easily maintained while still being safe and reliable. I'm an amateur developer, going to college next year. I don't want to spend 50% of my time maintaining the database.
I think I've covered most of the details you might need; if you need any more, please ask.
I thank you in advance :)
Just go with solution 2. The downside of spreading the workload to many servers is addressed by "partitioning"; look here for a starting point: http://dev.mysql.com/doc/refman/5.1/en/partitioning-overview.html
Partitioning would allow you, for example, to keep all rows of a table for even consumer IDs in one partition and all the others in another, or whatever split you want; pushing partitions onto genuinely separate servers is a further step (usually called sharding).
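As a sketch, partitioning one of the big tables by consumer ID looks something like this in MySQL 5.1 (table and column names are only examples):

    -- Rows are spread across partitions by a hash of the consumer ID, so
    -- each consumer's data always lands in the same partition. Note that
    -- the partitioning column must be part of every unique key.
    CREATE TABLE payments (
        payment_id  INT NOT NULL,
        consumer_id INT NOT NULL,
        amount      DECIMAL(10,2) NOT NULL,
        paid_at     DATETIME NOT NULL,
        PRIMARY KEY (payment_id, consumer_id)
    )
    PARTITION BY HASH (consumer_id)
    PARTITIONS 8;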
But I wouldn't start out that complicated: do you need that now? Either way, it burdens you with a lot of additional overhead. You can also look into the NoSQL database world for solutions that can be spread across as many servers as you want with little effort. You lose SQL and its ACID features in most cases, though; if you need those, NoSQL is not an option.

What strategy/technology should I use for this kind of replication?

I am currently facing a problem that I have not yet figured out a good solution for, so I hope to get some advice from you all.
My problem, as in the picture:
The Core Database is where all the clients connect to manage live data; it is really, really big and busy all the time.
The Feature Database is not used as often, but it needs some part of the live data (maybe 5%) from the Core Database, and the requests against this server take longer and consume a lot of resources.
My current solution:
I use database replication between the Core Database and the Feature Database, and it works fine, but the problem is that I waste a lot of disk space storing unwanted data.
(Filtering while replicating does not work with my database schema.)
Using a queueing system would not keep the data live on time, as there are many requests to the Core Database.
Please suggest some ideas if you have run into this.
Thanks,
Pang
What you describe is a classic data integration task. You can use any data integration tool to extract data from your core database and load it into the feature database, and you can schedule the integration jobs anywhere from near-real-time to whatever time frame suits you.
I used Talend in my mid-size (10GB) semi-scientific PostgreSQL database integration project. It worked beautifully.
You can also try SQL Server Integration Services (SSIS). This tool is very powerful as well. It works with all top-notch RDBMSs.
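Whichever tool you pick, the job it schedules boils down to an incremental extract on the core side and an idempotent load on the feature side. A rough sketch, assuming the tables involved have (or can be given) a last_modified timestamp column and using made-up table names:

    -- On the Core Database: pull only the rows changed since the last
    -- successful sync (the integration job stores @last_sync between runs).
    SELECT order_id, customer_id, status, last_modified
    FROM orders
    WHERE last_modified > @last_sync;

    -- On the Feature Database: upsert the extracted rows so re-running the
    -- job never creates duplicates (MySQL syntax; other engines use MERGE).
    INSERT INTO feature_orders (order_id, customer_id, status, last_modified)
    VALUES (?, ?, ?, ?)
    ON DUPLICATE KEY UPDATE
        customer_id   = VALUES(customer_id),
        status        = VALUES(status),
        last_modified = VALUES(last_modified);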
If all you're worrying about is disk space, I would stick with the solution you have right now. 100GB of disk space costs less than a dollar these days - for that money, you can't really afford to bring a new solution into the system.
Logically, there's also a case to be made for keeping the filtering in the same application - keeping the responsibility for knowing which records are relevant inside the app itself, rather than in some mysterious integration layer will reduce overall solution complexity. Only accept the additional complexity of a special integration layer if you really need to.

Synchronizing MS Access database file

I am developing a database with about 10 tables in it. Basically it will be used in 2 or 3 distant geographical locations (let's call them A, B and C). The desired workflow is as follows:
A, B and C should always have the same database, so when A makes any changes he should be able to send those changes over to B and C. Emailing the entire mdb file doesn't make sense since it's 15+ MB in size, so I would like to send only the new records and changes to B and C. The changes B and C make should also be reflected back to the other respective parties. How can I do this?
I have a few ideas in mind but don't know how to implement them.
Solution 'A': export only the data tables into an xls file and email that. But importing the tables back into the mdb file could be a bit complex, right? And the xls file will also keep growing over time.
Solution 'B': try to extract just the changes and email only the new parts (but how do I extract just those? A rough sketch of what I have in mind follows after this list.)
Solution 'C': find some way of syncing all users onto the same database (storage) location. I was thinking of a front-end/back-end split, storing the tables on a shared drive on the parent company's server (which is also overseas). But the network connection between locations is very slow, and I don't know how much bandwidth would be needed for this.
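For solution 'B', the rough idea would be to stamp each record with a last-modified date/time (a field my tables don't currently have, so this is an assumption) and export only what changed since the last send:

    -- Jet/Access SQL: pull only the rows changed since the last export so
    -- just the delta needs to be emailed. LastModified would have to be
    -- maintained by the forms/VBA on every edit; the literal date below is
    -- just a placeholder for "date of the last export".
    SELECT *
    FROM Contacts
    WHERE LastModified > #01/15/2024#
    ORDER BY LastModified;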
Any recommendations would be most welcome!
In regard to sources for information on replication, start with my Jet Replication Wiki.
But I would never recommend Jet replication for your scenario. The only environment where I currently recommend it (and I've been doing replicated apps since 1997 and still have several in production use) is for supporting laptop users who have to work with live data in the field disconnected from any network, and return to the home office and synch direct with the mother ship.
The easiest solutions with an Access application would be hosting the app on Windows Terminal Server/Citrix, with the users running it over a Remote Desktop Connection, or using Sharepoint. The Terminal Server/Citrix solution has no accommodation for disconnected users, but Sharepoint can accommodate offline usage and sync changes when connected. Access 2010 and Sharepoint 2010 provide a host of new features, including better schema design, the equivalent of triggers and greatly improved performance for large Sharepoint lists, so it's a no-brainer to me that if you choose Sharepoint you'd want to use A2010 and Sharepoint 2010.
While it's possible to do what you want with Jet Replication, it requires a lot of setup on the server and client ends, and is relatively fragile (not in terms of data integrity if you're using indirect replication (as you should), but in terms of network reliability) -- there are too many moving parts and too many failure points.
Windows Terminal Server/Citrix is by far the simplest, with the fewest moving parts and completely centralized administration, and works very well for a relatively small investment.
Sharepoint is more complicated than WTS/Citrix, but is less complex and more centralized than a Jet Replication solution.
If it were me, I'd probably go with WTS/Citrix if there was no need for disconnected usage, but I'd be salivating over trying out A2010/Sharepoint 2010. If there was a need for disconnected usage, then I'd definitely go the Sharepoint route.
You want to use "Jet Replication". See
MSDN Search for jro at http://social.msdn.microsoft.com/Search/en-US?query=jro&ac=8
MSDN Search for access replication at http://social.msdn.microsoft.com/Search/en-US?query=access%20replication&ac=3
It's been some time since I did it, but the indirect method of replication worked well for me in a similar situation.
It takes something to set up. The documentation used to be appalling for it, but I found articles written by Michael Kaplan (aka Michka) that walked me through how to do it.
If your final environment is going to be fairly stable, then use Access the whole way. If not, then I'd urge you to take HansUp's advice and go with SQL Server or SharePoint.
Do note: if you're working in Access 2007 or later, replication is not directly supported, and you'll have to roll-your-own bits and pieces. If you're using an earlier installation, you'll be fine, but allow time for some head-scratching.

MySQL to SQL Server 2005

How can I convert a database from MySQL to MS SQL Server 2005?
You can use SSIS to copy over the table data to the new structure, but that is the easy part. Next you need to check all your SQL code to make sure it will still work. This link can help you see the differences in how each of the databases implements SQL:
http://troels.arvin.dk/db/rdbms/
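A few of the differences you will run into almost immediately when reviewing the code (illustrative only, with made-up table names):

    -- Paging: MySQL uses LIMIT, SQL Server 2005 uses TOP
    SELECT * FROM orders ORDER BY order_date DESC LIMIT 10;   -- MySQL
    SELECT TOP 10 * FROM orders ORDER BY order_date DESC;     -- SQL Server

    -- Auto-incrementing keys: AUTO_INCREMENT vs. IDENTITY
    CREATE TABLE customers (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));  -- MySQL
    CREATE TABLE customers (id INT IDENTITY(1,1) PRIMARY KEY, name VARCHAR(100));   -- SQL Server

    -- Current date/time and string concatenation
    SELECT NOW(), CONCAT(first_name, ' ', last_name) FROM customers;   -- MySQL
    SELECT GETDATE(), first_name + ' ' + last_name FROM customers;     -- SQL Server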
While you are converting, you might consider if now might not be the best time to do some refactoring as well.
The key piece of doing a conversion, though, is to make sure that everything is automated and reproducible. You are going to want to do this several times in dev before moving to prod data. And when you go to prod, you will need to take the database down for maintenance, or you will end up having data added to the old database after you have moved the data from that table to the new one. You might even want to build the process to copy over the bulk of the records before the maintenance window and then, during the maintenance window, only move the new records or the records which have changed since the main move. This will depend on how big your database is and how long you can afford to be down while moving the records. If it can be done in one step without being down for longer than your system can tolerate, it is better to do that. Another choice for a large database might be a client-by-client data movement, so that instead of being down for everyone for a full day, you are only down for a couple of hours per client. Again, this depends on your database design and how feasible that might be to set up and do.
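A sketch of the "bulk load ahead of time, delta during the window" idea, written as if both databases were reachable from the same SQL Server instance (in practice the MySQL side would be reached through SSIS or a linked server), and assuming the source rows carry a reliable modified_date, which you would need to verify:

    -- Before the maintenance window: bulk-copy everything up to a cutoff.
    INSERT INTO new_db.dbo.customers (customer_id, name, modified_date)
    SELECT customer_id, name, modified_date
    FROM old_db.dbo.customers
    WHERE modified_date <= '20240115';

    -- During the window: refresh rows that changed after the cutoff...
    UPDATE t
    SET    t.name = s.name,
           t.modified_date = s.modified_date
    FROM   new_db.dbo.customers AS t
    JOIN   old_db.dbo.customers AS s ON s.customer_id = t.customer_id
    WHERE  s.modified_date > '20240115';

    -- ...and bring over rows that are entirely new.
    INSERT INTO new_db.dbo.customers (customer_id, name, modified_date)
    SELECT s.customer_id, s.name, s.modified_date
    FROM   old_db.dbo.customers AS s
    WHERE  NOT EXISTS (SELECT 1
                       FROM new_db.dbo.customers AS t
                       WHERE t.customer_id = s.customer_id);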
Whatever you do, make sure the users are fully aware of what you are doing and when, well in advance, so they can plan. Also avoid times of the month for the change that would coincide with a need for the database to be up and running - I'm thinking in terms of: don't close the payroll database the day that payroll runs, or the financial database when end-of-fiscal-year tasks need to be done or monthly reports run, etc. I don't know if you have any of those issues, but it is good to consider them if you do and work around those periods. If the users say, "No, we can't do that on Friday", then find out why - they may have a really good reason why the day you chose to implement is bad for their own work schedules.
Here is an application that will do the conversion for you:
http://www.spectralcore.com/fullconvert/tutorials/convert-mysql-to-mssql-sql-server.php
This white paper by Microsoft may also help:
http://technet.microsoft.com/en-us/library/cc966396.aspx