Access - Prevent Database Size Growth - ms-access

I am using an MS Access 2013 light application that was developed by a third party. I did not do the coding/design/management of the project, but I am responsible for implementation for my team. I also do not have the option of switching to another solution, but I do have access to the VBA code so I can make tweaks to clean up their mess.
My problem is this:
Set up the application with my data (a-ok).
Run the built-in, fairly complex third-party macro.
For most cases, things are just fine... but when running it on a larger dataset the file size of the Access file exceeds 2 GB and the entire operation fails.
On fail, the process has to be restarted. For the same data set, it fails each and every time it reaches approximately 55% complete.
I am unable to complete my work because of this. :|
Solutions tried:
Compact and repair - Fine when it fully executes, but the issue is that it reaches 2GB while the macro is running and cannot be interrupted.
Splitting the database - Splits OK, but doesn't fix issue.
Attempting to trigger a compact and repair inside the macro during the loop - Fails because Access cannot lock the database.
Desired solution:
A way to prevent the file growth/bloat while the macro is running. Either through a compartmentalization of the process or through some other wizardry I am unaware of at this time.
A solution that does not require extensive reconfiguration of the underlying code. I can deal with inefficient - so long as I can fix this issue for this one instance (1 critical error in 44 runs of different data in the database).
Any help?

I would recommend Compact on Close as a quick and dirty solution:
On the File tab, click Options.
In the Access Options dialog box, click Current Database.
Under Application Options, select the Compact on Close check box.
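If you would rather flip that setting from code (for example when preparing the front end), a minimal VBA sketch is below; it assumes the built-in option name "Auto Compact", which is what Access uses for this checkbox, so verify it in your version.

' Turn on "Compact on Close" for the current database.
Public Sub EnableCompactOnClose()
    Application.SetOption "Auto Compact", True
End Sub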
ADVANCED SOLUTION
The other solution requires splitting the database.
After splitting you have another option.
Use the front end with a SQL Server backend (check which version is suitable for you; I think the free Express edition is enough to start with if you don't expect a huge amount of data):
Split the database
Install SQL Server (MySQL or SQL Server Express Edition)
Create all tables in the SQL Server
Link the front end to the SQL Server
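As a rough sketch, the "link the front end" step can also be done in VBA with a DSN-less linked table. Everything below (server, database, table names, and the driver name) is a placeholder; substitute whatever SQL Server ODBC driver you actually have installed.

' Create a DSN-less linked table pointing at a SQL Server backend (sketch).
Public Sub LinkSqlTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb()
    Set tdf = db.CreateTableDef("Customers")    ' local name of the link
    tdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"
    tdf.SourceTableName = "dbo.Customers"       ' table on the server
    db.TableDefs.Append tdf
    db.TableDefs.Refresh
End Sub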

I think davejal pretty much nailed this one.
If you have a handful of really large tables, you can put those into another Access DB, and make a link to those.
https://support.office.com/en-us/article/Import-or-link-to-data-in-another-Access-database-095ab408-89c7-45b3-aac2-58036e45fcf6
The 2GB limit is per DB.
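A quick sketch of creating such a link from VBA (the backend path and table name below are placeholders):

' Link to a large table that lives in a separate Access backend file.
Public Sub LinkToBackendTable()
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "C:\Data\LargeTables.accdb", acTable, "tblBigData", "tblBigData"
End Sub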
Or, upgrade to SQL Server Express for free, and use Access as a front end to that SQL Server backend.
SQL-Server Backend, MS Access Frontend: Connection
Here's a link to get SQL Server Express.
https://www.microsoft.com/en-us/download/details.aspx?id=42299

Related

MS Access handle ODBC fault at opening database

I have an MS Access frontend which is using a SQL database as backend. To connect, I use an ODBC connection and have created the needed entries in "ODBC Data Sources (32 Bit)".
When I will give the database to others, they will need to create this data source. I have a batch file for this so they can just run it.
If they do not run it, they will get a fault like "ODBC connection to XY failed". How could I change this error or at least write a second Message Box afterwards where I can tell them "run the Batch file XY to connect"?
When I will give the database to others, they will need to create this data source.
That is your first bad mistake and assumption. You MOST certainly do NOT want to deploy your application that way.
The way you deploy?
You take your accDB file and link the tables to SQL Server, and you link using what are called DSN-less connections. Such connections do NOT require ANY data source to be set up on each workstation.
So, ok, now you have linked the tables to SQL Server (the production one - you probably were developing locally on your developer PC with a local copy of SQL Server Express edition).
So now you link the tables to THEIR server, and then you compile the accDB down to the compiled executable version of Access - an accDE.
You are now free to deploy this "application" to any and all workstations for that company - and they do not have to re-link tables, do not have to set up a data source, and in fact they don't have to do anything but simply run/launch the application.
How do you make and get a DSN-less connection?
Well, the MOST simple way is to ALWAYS, but ALWAYS ALWAYS, create the linked tables using a FILE dsn. In fact, when you launch the ODBC connection manager from Access, the default is to use a FILE dsn. Never use a "system" or "user" dsn.
If you link the tables using that FILE dsn, then Access converts them to DSN-less connections. At that point, you can even delete the DSN you created - Access 100% ignores the DSN, and you don't need it anymore.
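You can confirm what Access actually stored by printing the linked table's connection string in the Immediate window (the table name below is a placeholder):

' A DSN-less link stores the full connection string on the TableDef,
' not a reference to any DSN.
Debug.Print CurrentDb.TableDefs("tblCustomers").Connect
' Typical output looks roughly like:
'   ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;...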
Next up:
If you have been using the nc 17 or nc 11-18 drivers? Then yes, each workstation MUST have that same driver installed. Or, you can use the older "legacy" SQL Server driver. But be careful - the legacy driver does not support the datetime2 column data type.
So, MAKE double, triple, quadruple sure that you are not using data types and columns that require the newer drivers.
Right now, for some of the larger sites, we STILL use the older "legacy" driver, since that driver is installed at the OS level and has been installed on every copy of Windows going back to Windows XP - in fact Windows 98 SE started shipping that driver. So you can 100% assume that the legacy SQL driver is and will be installed on each workstation.
And by using + adopting DSN-less connections, no setup of the connection on each workstation is required. As long as those workstations are on the same network as the SQL server you linked the tables to, they are good to go.
Now, on some sites we actually can't even be on site, and we can't pre-link to their SQL server. So, what we do is on application startup, we check the current link of a linked table, and if it does not match a little external text file we ALSO included when setting up the workstation, then we use VBA code to re-link the tables on startup. But once again, re-linking tables in VBA is easy, and again does NOT require ANY KIND of "dsn" or ODBC setup on each workstation.
And in fact, another way we used to do this is to have a one-record table in the front end, and that record holds the connection string. So, right before deployment, we just edit that one record in the front end to have the correct connection to their SQL server. And once again, on startup, we check a linked table and see if the connection strings match; if they don't, we run the VBA re-link code, and once again, zero configuration and zero need exists to set up anything at all on each workstation.
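A minimal sketch of that startup re-link idea follows. Where the target connection string comes from (an external text file, a one-record table, a constant) is up to you; every name here is a placeholder.

' Re-point every ODBC linked table at the connection string we want.
Public Sub RelinkOdbcTables(strCon As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb()
    For Each tdf In db.TableDefs
        ' Only touch ODBC links - leave local tables alone.
        If Left$(tdf.Connect, 5) = "ODBC;" Then
            If tdf.Connect <> strCon Then
                tdf.Connect = strCon
                tdf.RefreshLink
            End If
        End If
    Next tdf
End Sub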
So, as a general rule, every dog, frog and insect that deploys Access applications sets up some re-link code and checks that link on startup. In fact most developers have done this even without SQL Server - even when using an Access back end, re-link code is included to resolve this issue.
But, be it linking to an Access back end or a SQL Server back end, some kind of link check system is assumed to have been cooked up by you, and this code will run on startup to check if the linked tables are pointing to the correct location.
And be the back end Oracle, SQL Server or whatever, you can create what are called DSN-less linked tables. As noted, you can use VBA to do this, or in fact you can use the linked table manager - as LONG AS you use a FILE dsn when you link, Access converts that to DSN-less for you, and you are good to go.
So, in effect, you don't have to test/check/know about an ODBC fail, since you checked for the correct connection string on startup.
However, there is a way to trap and check for an ODBC failure, and it involves using a DIFFERENT way to connect, since we all know that if you hit an ODBC fail you are duck soup (you have to exit Access, and there is NO KNOWN way around this issue - well, except for testing whether you can connect WITHOUT using a linked table, since as noted, once an ODBC connect error triggers, it is game over).
The way you do this "alternate" test is like this:
Function TestLogin(strCon As String) As Boolean
    On Error GoTo TestError

    Dim dbs As DAO.Database
    Dim qdf As DAO.QueryDef

    Set dbs = CurrentDb()
    Set qdf = dbs.CreateQueryDef("")

    qdf.Connect = strCon
    qdf.ReturnsRecords = False

    ' Any VALID SQL statement that runs on the server will work below.
    ' (This does assume the user has enough rights to run a query on the server.)
    qdf.SQL = "SELECT 1"
    qdf.Execute

    TestLogin = True
    Exit Function

TestError:
    TestLogin = False
    Exit Function
End Function
So, the above does not hit or use a linked table. And a HUGE bonus of the above? Once you execute that valid logon, then any and all linked tables will work - AND WILL WORK without even having included the user/password in the connection string for the linked tables. This is in fact a huge bonus in terms of security, since now you don't have to include the user/password in the linked tables, which of course would expose the user/password in plain text to any user who cared to look and find the SQL user/password used.
In fact, what this means is that you can link your tables, but NOT include the user/password when you link the tables!!! - this is a HUGE security hole plugged and fixed when you do this.
So, once a valid logon has occurred (such as above), then any and all linked tables will now work, and work without even having included the user/passwords in those linked table connection strings.
As noted, the other big bonus is that you can use the above code to test for a valid connection and avoid that dreaded "odbc" error, since as noted, if an ODBC connection error is EVER triggered at any point in time, you MUST exit the application - no other way out.
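For the original question - replacing the raw ODBC error with a friendlier message - a startup check built on that function might look like the sketch below. The connection string, message text, and batch file name are placeholders.

' Run this from your startup form or macro before touching any linked table.
Public Sub CheckServerOnStartup()
    Dim strCon As String
    strCon = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;UID=AppUser;PWD=secret;"

    If TestLogin(strCon) Then
        ' Connection works - bound forms and linked tables will open normally.
    Else
        MsgBox "Could not connect to the SQL Server. " & _
               "Please run the batch file XY and then restart the application."
        Application.Quit
    End If
End Sub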
However, it should be noted that if you are ever going to use a wi-fi connection, or say a cloud based SQL server running on Azure?
In that case, with wi-fi or a cloud based edition of SQL server, such connections over the internet are of course prone to minor and frequent disconnects.
ODBC was developed long before the internet, and long before people would do things like connect access to some cloud based sql server running, and using a connection over the internet. But, if this turns out to be your use case, and deployment case?
Then you have to bite the bullet and ASSUME and ENSURE that you now adopt the nc 11-18 drivers (I would go with nc 17). These new drivers are "internet" aware, and they are able to gracefully handle minor disconnects and in fact automatically recover and re-connect.
So, if you are ever going to use wi-fi, or connect to a cloud based server? Then yes, you have to link the tables using, say, the newer nc 17 driver, and you ALSO MUST THEN ensure that the same driver version you linked the tables with is installed on each workstation. You still don't have to set up any dsn connection and all that jazz - but you do have to ensure that the driver you used is ALSO installed on those workstations.
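In practice the only difference in the link itself is the driver name in the connection string. A sketch, reusing the RelinkOdbcTables routine from earlier (driver, server, and database names are assumptions - use whatever is actually installed):

' Re-link everything using the newer driver instead of the legacy one.
Public Sub UseNewerDriver()
    RelinkOdbcTables "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
                     "SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"
End Sub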
As noted, for larger deployments we thus use the standard "legacy" SQL driver, as it would be too painful to go to all the workstations and install a newer driver.
However, we had one location, and for months they were experiencing ODBC connection failures. We had them replace a router, and even a network card on a server - but the problem still remained.
We suspect that some workstations had aggressive power management turned on; newer Windows 10-11 will often put the network card to sleep, and thus when using Access we were seeing ODBC errors. So, for that company, we had them install nc 17 and linked Access to SQL Server using that driver, and the problem went away (those newer drivers have built-in re-connect ability - a relatively new feature of ODBC, and one that legacy systems and drivers don't have).

Splitting MS Access to Front End and Backend but Backend in a different Computer

We have an MS Access database with millions of rows and I need to split the front end and back end. Can the back end be stored on a Windows Server or a high performance computer? Then the developers can connect to the central back end server, DB work is done on the server, and the front end developers use their desktops rather than the back end DB machine.
I used the SPLIT option in MS Access and it works well.
Yes you can, but the bottleneck here could be the network; you don't need much power for a split Access database. Access moves a lot of data because of its design: if you query a table, Access will read the whole table from the server and filter it later on the client.
If your frontend is in an early state of development, consider using SQL Server (or any database server with ODBC) and running every query in pass-through mode.
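A rough sketch of creating a pass-through query from VBA, so the filtering happens on the server rather than the client (the query name, connection string, and SQL are placeholders):

' Create a pass-through query; Access sends the SQL to the server as-is,
' so only the result rows travel over the network.
Public Sub CreatePassThroughQuery()
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb()
    Set qdf = db.CreateQueryDef("qptActiveOrders")
    qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDb;Trusted_Connection=Yes;"
    qdf.SQL = "SELECT OrderID, OrderDate FROM dbo.Orders WHERE Shipped = 0"
    qdf.ReturnsRecords = True
End Sub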
Also think about an easy way to distribute the frontend to clients when you make changes.

SSIS Mysql best practices

We are in the process of moving our backend from ms sql server to mysql. Actually we currently use a couple mysql servers, but mostly ms sql server. I mention this because we are not totally new to mysql. Each day we do a lot of ETL to keep our backend in sync with a legacy system. We move a lot of data and working with sql server has been so much easier than working with mysql for ETL. I know SSIS is MS, but still it has been a headache.
We are using sql server 2012 and BIDS 2010. It has been a struggle to move mysql data at the same rate as ms sql data. We are mainly dealing with innodb tables in mysql. To summarize I have been using the mysql ODBC connector and the ODBC destination in SSIS. The first step is to turn autocommit off on imports. Even with that setting off I can see in package execution that the data source ends up waiting on the destination. It gets about 40,000 rows ahead and waits.
Next I export the data to a text file and then import using a sql task and the INFILE command. This gives pretty good performance, but at the expense of more moving parts. I've had a couple issues with this approach, but it does work and perform well.
Lastly I tried a 3rd party SSIS component from Devart. It creates custom mysql source and destination components. The performance isn't as good as INFILE, but it's not bad and it makes the package simple like when dealing with sql server... a data source and a data destination. No messing with auto commits, exports, INFILE, etc. However I can't use the connections to do other tasks like truncate tables and stuff. So I still have my ODBC connection to do those tasks. I'm going to ask Devart about this.
Right now it looks like Devart is going to be a nice balance. If I absolutely need the performance I have the INFILE method.
I also tried the MySQL .NET connector and could not get that to work at all. I'm running on Windows 7 64-bit with SQL Server 2012 64-bit. Basically everything I need in BIDS runs in 32-bit, so I'm guessing that is part of the issue.
My question is what are others doing when it comes to moving mysql data with SSIS? It has been such a hassle. It would be nice to get some input on what others are doing. What methods are you using? Are you using 3rd party components? Is there a better/dedicated place to discuss SSIS and mysql?

How to access SQL Server Publishing Wizard 1.4

I've had a big problem in replicating a simple SQL Server 2008 R2 Express database for use on a development server. I thought I had it sorted, but it turns out that each table has lost its 'Identity' value somewhere along the line, and it's not possible to add those back in now. This is pretty much useless. So I'm back at square one: having to get a copy of an MSSQL database plus data from one web server to another web server.
I've read that SQL Server Publishing Wizard does this, and maintains crucial things like identity settings etc. Trouble is, I'm working with SQL Server 2008 R2 Express and I can't actually seem to find a way to access that program anywhere - even though when I go to 'control panel > remove programs' it's in there. When I try to find it on my system (e.g. via start > find programs / files) it's nowhere.
Does anyone know how to access this program, and will it do what I need?
Thanks!
Sure thing, thanks Michael. So the solution was to connect to the database through VWD 2010 Express, which has the options required to do this. There are actually some really great third party tools which do database migrations from one system to another detailed here: http://erikej.blogspot.co.uk/2009/04/sql-compact-3rd-party-tools.html. The ones on this page are geared specifically at SQLCE migrations, but several of the tools also support other full SQL versions too.

Timeout issue during data transfer from MySQL to SQL Server using SSIS

I am trying to transfer 67,714,854 rows from MySQL to SQL Server using SSIS. The package times out after transferring 14,282,990 rows. I changed the time out property to 0 also, but that didn't help.
How do I resolve this issue?
I found a hacky solution to it, and that is having a LIMIT at the end of your query. I was facing the same problem with the ADO.NET connection to MySQL. Although it doesn't solve the problem, it at least gets the work done.
SSIS: 2008 R2.
MySQL: 5.0
On your OLE DB Destination connection, what "Data access mode" have you selected? If you have selected "Table or view - fast load" (this is the default), then there will be a "Maximum insert commit size" specified. You can try one of two things: 1) change the commit size to a larger number; or 2) try the other data access mode, "Table or view". Since you're getting a timeout error, I suspect that option 1 may not help (since you're already getting a timeout with a smaller value), so try option 2. Although that will likely result in slower performance, it may actually complete successfully. (You could then try @Siva's approach and split the output across multiple destinations to improve performance.)
(Note: I'm referring to what's available in SQL Server 2008 R2, if you're using previous versions, it may be slightly different)
If none of the above work, you could also try to create a new SSIS package from scratch by running the SQL Server Import Wizard (right-click on your database in SQL Server Management Studio and select Tasks/Import Data. Follow the wizard screens and near the end make sure you check the box to Save the SSIS package, and choose a file location to save it to. Typically, the resulting SSIS package will be a functional package (and then you can also make whatever further modifications you like to it).
Does MySQL give you the error, or are you using PHP (or another language) to transfer the data and does that time out? In the case of the latter, in PHP you can set the script timeout to infinite using this:
set_time_limit(0);
Either way, based on the information given, I'm not sure what type of database it is, but typically I would set up a cron script to transfer the data bit by bit in order to keep the load at an acceptable level. Please give more information...