This question concerns the use of Connector/Net, MySQL, and the Visual Studio integration of the typed DataSet designer. As a beginner I have almost entirely default settings and am using InnoDB for my tables.
It appears that on certain tables the designer slows to a crawl when adding new queries to the adapter for me. Looking in the Workbench administrator, it appears to be looping through similar queries: fetching the table schemas, checking permissions, deleting temporary tables, and so on. I presume it is trying to validate the syntax of the query I'm adding and gather the details needed to implement the XSD spec.
I am not sure how to debug this further. The slowdown is essentially a total freeze. If I had to guess, I'd say there is a loop running rather than a slow sequence of events: a timeout exception gets swallowed and the loop repeats. If you shut down the MySQL service, the logon credentials dialog pops up in the designer, but you can't cancel out of it without killing the VS process, which seems sloppy.
Any ideas? Is this functionality being used by other people or am I in no man's land?
Related
I am new to a role at a company that uses MS Access with a MySQL database running on a server physically in our office, behind our private network. I have been hired to develop an entirely new application to bring the company up to modern standards. As we move features/modules to the new Angular/NodeJS app I am writing, users still need to use the UI provided by MS Access against the new production database, which will be on AWS Lightsail.
However, when I change the MS Access ODBC connection configuration to point to the AWS Lightsail MySQL database, everything in the MS Access UI (reports especially) becomes much slower than when it pointed to the MySQL database here in the office, on our network.
I am going to the "Linked Table Manager" and changing the "Connection String".
I read somewhere that I should make sure SSLMODE is disabled to remove any performance issues it might introduce.
DSN=AWS_Dev;DATABASE=ECSDataTables;PORT=3306;SERVER=IP_ADDRESS;SSLMODE=DISABLED;
I went through the normal "ODBC Data Source Administrator" in Windows and added the MySQL AWS host, user/pass as normal.
I have done extensive research and have found several sources, but none are really helping.
I have been asked not to spend too much time trying to fix/optimize anything in MS Access, as my focus should be on the new application, but it's hard to believe that simply switching MySQL databases can have such an impact. In the new Angular/NodeJS application everything runs very fast, so I know it's not the AWS MySQL database itself.
Am I missing something? Are there any configurations I should be doing in MS Access? I have not used VB in about a decade, so I am hoping something can be done without needing too much technical background in this area.
Thank You.
Well, the issue is that your local area network (LAN) is about 10 times faster, or more, than your internet connection.
Your low-cost office network is very likely a 1 gigabit network (100BaseT is rare these days).
However, your high-speed internet connection is likely around 10 megabits. So you are going from 1000 to 10: that is 100 times slower. So 3 seconds now becomes 300 seconds.
With such a slower connection speed, this should come as no surprise.
What you can do, for any report based on a complex client-side SQL join, is convert the query into a server-side view and link to that view. Then use that view as the record source for the report. The VBA filters that you always use (right???) to launch a report will then only pull the data the report needs down the network pipe. Access reports (and forms) only pull down what you ask for, not the whole table, so any filter you supply (use the WHERE clause of the OpenReport command) will be respected. So you either have to pull less data, or find a connection with a speed rating similar to your local area network (and such high-speed internet is rare).
The LAN vs WAN concept and speed issue is outlined in this article:
http://www.kallal.ca//Wan/Wans.html
While the above article is very old, internet speeds are about 10x faster today, but so are local area networks, which have gone from 100BaseT to 1 gigabit.
So, things are slower because you are working with a VASTLY slower connection speed. Slower is slower!!!
Edit
While, as noted, Access will only pull what you ask for, the case where the Access client does a poor job is SQL queries that involve multiple tables: often the client will mangle what it sends server side. As noted, the solution in this case is to adopt server-side views. This means you move the client-side query that drives the report into a view and link to that view. You will not gain much for a single-table query, but for any report based on complex multi-table joins, a view forces the SQL and the "join work" to occur on the SQL server side, and this can result in huge performance gains.
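For example, a rough sketch (the table and column names here are made up for illustration): the multi-table join that drives the report becomes a view on the MySQL side, and Access links to that view instead of to the base tables.

    -- Hypothetical example: substitute your own tables and columns.
    -- The join now runs on the MySQL server, not on the Access client.
    CREATE VIEW vw_InvoiceReport AS
    SELECT  i.InvoiceID,
            i.InvoiceDate,
            c.CompanyName,
            d.ProductName,
            d.Qty * d.UnitPrice AS LineTotal
    FROM    Invoices       AS i
    JOIN    Customers      AS c ON c.CustomerID = i.CustomerID
    JOIN    InvoiceDetails AS d ON d.InvoiceID  = i.InvoiceID;

You then link to vw_InvoiceReport in Access, set it as the report's record source, and pass your filter in the WHERE clause argument of the OpenReport command, so only the filtered rows come down the wire.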
Well, this is a case where limited knowledge just produces worse results than expected.
Over the years, top DBAs have come to just "hate" MS Access... they see only problems, issues, you name it... the closing sentence is always "switch to a real database engine".
Well, this has created the faulty assumption that MSSQL, MySQL, Oracle, PostgreSQL and the rest of the database engines are some kind of "magic pill": you just switch the BE to one of the above DBEs and all your problems get resolved... just like that.
DBE -- Database Engine (if you would like to call it something else, feel free).
WRONG
MS Access follows a different philosophy from the other DBEs, and it does its job damn well given all its shortcomings and the major fact that it is a file-based DBE.
Switching to another DBE will give amazing performance IF and ONLY IF you respect the fact that you are no longer working with Access. Just don't treat e.g. MySQL as your file repository, and DON'T just link the tables and expect everything to go well...
Want to keep blaming Access? Just switch over to another platform (.NET, PHP, JS, Java... take your pick) and write a small application that pulls ALL of your data in a single go, like you do with Access. It will certainly crash or go "Not Responding"...
So stop blaming Access. Start reading up on how to make the most of both worlds and I am pretty sure the results will amaze you... but again, I must stress that this is not a "magic pill" solution. It involves a LOT of work: planning, data manipulation, normalization, code changes and, above all, a change of philosophy.
I would recommend starting the journey by picking up this book: https://www.amazon.com/Microsoft-Access-Developers-Guide-Server/dp/0672319446 (I don't want complaints that it's old and about MSSQL... just read first and complain later).
Also take a look at an old benchmark-like video I made some years ago: https://www.linkedin.com/posts/tsgiannis_a-small-demo-of-connecting-ms-access-fe-to-activity-6392696633531858944-dsuU
Last but not least: years ago I ran some tests to see what the "magic pill" would do to my company's applications. The simplest test of all:
A simple table with a few fields but around 8 million records... just display it.
Access BE (local)--> It would run in 1-2 seconds...that's fast
Access BE (Network share)--> It would run in a few seconds...not so fast but it was usable
MSSQL BE (linked table) --> sometimes it would get the results, sometimes it wouldn't... slow... really slow... like make a coffee and go for a short walk.
MySQL BE (linked table) --> it never finished... timeout or "Not Responding".
PostgreSQL BE (linked table) --> it never finished... timeout or "Not Responding".
So stop blaming Access...start working and get amazed....
I am optimising a large stored procedure in SQL Server 2008 which uses a lot of dynamic SQL. It is a query which searches the database with a number of optional parameters and, short of coding for every possible combination of parameters, dynamic SQL has proven to be the most efficient way to execute it. The SQL string is built with the parameters included and then passed to sp_executesql along with the parameter list. When run in SSMS with any combination of parameters it executes very quickly (<1s) and returns results. When run from a Windows Forms application, however, it sometimes takes considerably longer.
I have read that a difference in the ARITHABORT option can cause this (ON by default in SSMS and OFF in ADO); however, I am unsure whether turning it on fixes the issue or merely masks it. Does the difference in settings affect the query itself, or does it just mean that SQL Server will use different cached execution plans? If so, should clearing the cache and statistics level the playing field?
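For what it's worth, this is the kind of check I had in mind to compare my SSMS session with the application's session (a rough sketch; the application's program_name filter is made up):

    -- Compare the session-level SET options of the two connections.
    SELECT session_id,
           program_name,
           arithabort,
           ansi_nulls,
           quoted_identifier
    FROM   sys.dm_exec_sessions
    WHERE  program_name LIKE '%Management Studio%'
       OR  program_name LIKE '%MyWinFormsApp%';  -- hypothetical app name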
I have also read differing points of view on OPTION (RECOMPILE). My understanding is that when sp_executesql is used with a parameter list, each combination of parameters (i.e. each distinct SQL string) produces its own execution plan; however, as the possible combinations of parameters are finite, this should still result in optimised queries. Other sources say recompilation should be turned on at the start of any SP that uses dynamic SQL.
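To make the pattern concrete, the procedure does something along these lines (a heavily simplified sketch with made-up table and parameter names, not the real SP):

    -- Simplified sketch of the dynamic search, not the real procedure.
    CREATE PROCEDURE dbo.SearchOrders
        @CustomerID int      = NULL,
        @FromDate   datetime = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max) =
            N'SELECT OrderID, CustomerID, OrderDate
              FROM dbo.Orders
              WHERE 1 = 1';

        -- Only the supplied parameters are appended to the SQL string,
        -- so each combination of parameters yields its own cached plan.
        IF @CustomerID IS NOT NULL
            SET @sql += N' AND CustomerID = @CustomerID';
        IF @FromDate IS NOT NULL
            SET @sql += N' AND OrderDate >= @FromDate';

        -- OPTION (RECOMPILE) could be appended to @sql instead, trading
        -- plan reuse for a plan tailored to each execution.
        EXEC sp_executesql
             @sql,
             N'@CustomerID int, @FromDate datetime',
             @CustomerID = @CustomerID,
             @FromDate   = @FromDate;
    END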
I realise that different situations require different settings; however, I am looking to understand these further before trying them arbitrarily on my very busy 24x7 production server. Apologies for the rambling, I guess my question boils down to:
What causes SQL to run differently in SSMS and Windows Forms?
If it is ARITHABORT then is this an issue related to execution plans or should I turn it on as a server default?
What is the optimal way to run queries with dynamic sql?
Run a trace in SQL Profiler to see what's actually being submitted to the server. Of course, you need to be aware of the impact of traces on production servers. In my experience, very short traces that are limited to a small set of events are not a big problem for servers that don't have a very high transactions-per-second load. Also, you can run a trace server-side, which reduces its impact, so that's an option for you.
Once you see what's actually being submitted to the database this may help you understand the problem. For example, sometimes DB libraries prepare statements (getting a handle to a sort of temporary stored proc) but this can be costly if it is done for each issuance of the query, plus it's not needed with sp_executesql. Anyway, there's no way of knowing for sure whether it will be helpful until you try it.
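A minimal server-side trace looks roughly like this (a sketch only; double-check the event and column IDs against sys.trace_events and sys.trace_columns, and adjust the file path, before running it anywhere important):

    -- Short server-side trace capturing completed RPCs and batches.
    DECLARE @TraceID int, @maxfilesize bigint = 50, @on bit = 1;

    -- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc to the file name.
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\app_vs_ssms', @maxfilesize;

    -- Event 10 = RPC:Completed, 12 = SQL:BatchCompleted
    -- Column 1 = TextData, 11 = LoginName, 13 = Duration
    EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 10, 11, @on;
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 12, 11, @on;
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;

    -- Start it (1); stop (0) and close (2) it when you're done.
    EXEC sp_trace_setstatus @TraceID, 1;

    -- Read the captured file afterwards:
    -- SELECT * FROM fn_trace_gettable(N'C:\Traces\app_vs_ssms.trc', DEFAULT);

Capture only what you need and stop the trace as soon as you have the statements you are interested in.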
I am researching the possibility of logging all the changes made to a MySQL database, including any DDL statements, and using that information to synchronize it with a remote database.
The application itself is written in C#, so the best synchronization technology I have found so far is the Microsoft Sync Framework. This framework proposes tracking changes made to the DB by adding triggers and additional tables to store the deleted rows.
This does not seem like a great idea in my case, since it involves changing the schema of a standard DB used by more than 4 products. This method also effectively doubles the number of tables (by adding a new table for the deleted rows of each table), which does not feel good either.
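For context, this is roughly the per-table trigger and tombstone table that trigger-based tracking implies (a made-up illustration, not the exact objects the Sync Framework provisions), multiplied across every table in the schema:

    -- Illustration only: one tombstone table and delete trigger per tracked table.
    CREATE TABLE customers_tombstone (
        id         INT       NOT NULL,
        deleted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (id, deleted_at)
    );

    CREATE TRIGGER trg_customers_delete
    AFTER DELETE ON customers
    FOR EACH ROW
    INSERT INTO customers_tombstone (id) VALUES (OLD.id);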
On the other hand, MySQL has this great thing called the binlog, which tracks all changes and can also use the so-called mixed mode: it logs statements in most cases (so they can be executed again on the remote DB to replicate the data) and the raw row data when a non-deterministic function is called (like NOW()), so the updated data ends up the same in both places.
There also seem to be two standard ways to retrieve this data:
1) The mysqlbinlog utility
2) Calling 'SHOW BINLOG EVENTS'
Option 2 seems better to me since it does not require calling another external application or running anything on the DB machine, BUT it does not include the actual data for events logged in ROW format (only things like table_id: 47 flags: STMT_END_F, which tells me nothing).
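For reference, this is roughly how I have been inspecting the log (the binlog file name is just an example):

    -- Check how statements are being logged (STATEMENT, ROW or MIXED)
    SHOW VARIABLES LIKE 'binlog_format';

    -- List the available binary log files
    SHOW BINARY LOGS;

    -- Page through the events of one file; ROW events show up as
    -- Table_map / Write_rows / Update_rows entries without the column values
    SHOW BINLOG EVENTS IN 'mysql-bin.000047' FROM 4 LIMIT 100;

(The mysqlbinlog utility, by contrast, can decode ROW events into commented pseudo-SQL with its --verbose option, but that brings back the external-application requirement.)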
So finally my questions are:
Is there a better way to track the changes made to a MySQL DB without changing the whole structure and adding a ton of triggers and tables? I could change the product to log its changes too, but then we would have to change all the products using this DB to be sure we log everything... and I think it's almost impossible to convince everyone.
Can I get all the information about the changes using SHOW BINLOG EVENTS, including the ROW data?
P.S. I researched MySQL Proxy too, but the problem with logging statements that way is that the actual values produced by non-deterministic functions are not included.
Option 3 would be to parse the binlog yourself from within your app. That way you get total control over how often you check, etc., and you can see all the statements with the actual values used.
I am looking at databases for a home project (ASP.NET MVC) which I might host eventually. After reading a similar question here on Stack Overflow I have decided to go with MySQL.
However, the ease of use and deployment of SQLite is tempting, and I would like to confirm my reasons before I write it off completely.
My goal is to maintain user status messages (like Twitter). This would mean mostly a single table with user-id/status-message pairs, with read/insert/delete operations on status messages. No updates are necessary.
After reading the following paragraph, I have decided that SQLite can't work for me. I DO have a simple database, but since ALL my transactions work with the SAME table, I might face some problems.
SQLite uses reader/writer locks on the entire database file. That means if any process is reading from any part of the database, all other processes are prevented from writing any other part of the database. Similarly, if any one process is writing to the database, all other processes are prevented from reading any other part of the database.
Is my understanding naive? Would SQLite work fine for me? Also does MySQL offer something that SQLite wouldn't when working with ASP.NET MVC? Ease of development in VS maybe?
If you're willing to wait half a month, the next SQLite release intends to support write-ahead logging, which should allow for more write concurrency.
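Assuming it ships as currently documented, turning it on should be a one-line pragma per database file (a sketch; the details may change before the release):

    -- Switch the database file to write-ahead logging; returns the new mode ('wal').
    PRAGMA journal_mode = WAL;

In WAL mode readers no longer block the writer and the writer no longer blocks readers, although there is still only one writer at a time.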
I've been unable to get even the simple concurrency SQLite claims to support to work - even after asking on SO a couple of times.
Edit
Since I wrote the above, I have been able to get concurrent writes and reads to work with SQLite. It appears I was not properly disposing of NHibernate sessions - putting Using blocks around all code that created sessions solved the problem.
/Edit
But it's probably fine for your application, especially with the Write-ahead Logging that user380361 mentions.
Small footprint, single file installation, fast, works well with NHibernate, free, public domain - a very nice product in almost all respects!
I wish to migrate the database of a legacy web app from SQL Server to MySQL. What are the limitations of MySQL that I must look out for? And what items would be part of a comprehensive checklist before jumping in and actually modifying the code?
The first thing I would check is the data types: the exact definition of each datatype varies from database to database. I would create a mapping list that tells me what to map each datatype to; that will help in building the new tables. I would also check for tables or columns that are not being used any more. There is no point in migrating them. Do the same with functions, jobs, stored procedures, etc. Now is the time to clean out the junk.
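As a sketch, such a mapping list might start something like this (these are common choices, not the only valid ones; verify each against your actual data):

    -- SQL Server column type      -- one common MySQL equivalent
    -- INT IDENTITY(1,1)           -> INT AUTO_INCREMENT
    -- BIT                         -> TINYINT(1)
    -- NVARCHAR(100)               -> VARCHAR(100) with a Unicode character set
    -- VARCHAR(MAX)                -> LONGTEXT
    -- DATETIME                    -> DATETIME
    -- MONEY                       -> DECIMAL(19,4)
    -- UNIQUEIDENTIFIER            -> CHAR(36)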
How are you accessing the data: through stored procedures or dynamic queries from the database? Check each query by running it against a new dev database and make sure it still works. Again, there are differences between how the two flavors of SQL work. I've not used MySQL, so I'm not sure what the common failure points are. While you are at it, you might want to time the new queries and see if they can be optimized. Optimization also varies from database to database, and there are probably some poorly performing queries right now that you can fix as part of the migration.
User defined functions will need to be looked at as well. Don't forget these if you are doing this.
Don't forget scheduled jobs; these will need to be checked and recreated in MySQL as well.
Are you importing any data on a regular schedule? All imports will have to be rewritten.
Key to everything is to use a test database and test, test, test. Test everything especially quarterly or annual reports or jobs that you might forget.
Another thing you want to do is do everything through scripts that are version controlled. Do not move to production until you can run all the scripts in order on dev with no failures.
One thing I forgot: make sure the dev database you are running the migration from (the SQL Server database) is refreshed from production immediately before each test run. You'd hate to have something fail in prod because you were testing against outdated records.
Your client code is almost certain to be the most complex part to modify. Unless your application has a very high quality test suite, you will end up having to do a lot of testing. You can't rely on anything working the same, even things which you might expect to.
Yes, things in the database itself will need to change, but the client code is where the main action is, it will need heaps of work and rigorous testing.
Forget migrating the data, that is the last thing which should be on your mind; the database schema can probably be converted without too much difficulty; other database objects (SPs, views etc) could cause issues, but the client code is where the focus of the problems will be.
Almost every routine which executes a database query will need to be changed, but absolutely all of them will need to be tested. This will be nontrivial.
I am currently looking at migrating our application's main database from MySQL 4.1 to 5; that is much less of a difference, but it will still be a very, very large task.