So I am trying to pass the time during the end of the world by learning a bit of Access (I already know a decent bit of SQL). I am particularly interested in pulling query results made in Access to use them in Excel.
So far I have managed to create my database in Access, populate it with data imported from Excel, and create the queries I need to use in Excel. I have also managed to get the query data into Excel using MS Query (from the "From Other Sources" menu, because I wanted to query the query with parameters) and it is all peachy. It fails when I upload the Access database to my work network: for some reason I can access the tables, but as soon as I try to load my Access query in MS Query, it just freezes to death.
I tried letting it run for 20 minutes to no avail. I was wondering if anyone has ever had that kind of issue. I tried several things; for example, I can access queries from another online database, the difference being that that database is lighter (the failing one is 100 MB), but considering I'm a beginner I am not sure what else to do.
Thank you in advance
Edit: I left it running and apparently it works, it just takes a gazillion years: it is a sum query that runs on a 900k-row table.
SELECT main.yearData, main.period, main.week, main.articleId,
       listArticles.brand, listArticles.description, groupToType.type,
       listArticles.model, listArticles.color,
       Sum(main.sales) AS totalSold, Sum(main.stock) AS totalStocks
FROM groupToType
INNER JOIN (listArticles INNER JOIN main ON listArticles.articleId = main.articleId)
    ON groupToType.productGroup = listArticles.productGroup
GROUP BY main.yearData, main.period, main.week, main.articleId,
         listArticles.brand, listArticles.description, groupToType.type,
         listArticles.model, listArticles.color;
Is there anything I can do to make it run faster? I cannot wait 15 minutes every time to get the updated info.
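(One thing I am considering trying, based on generic SQL advice rather than anything Access-specific I have verified, is indexing the join and grouping columns; in Access SQL that would look something like the statements below, with index names I just made up. I have no idea yet whether it actually helps here.)

CREATE INDEX idxMainArticleId ON main (articleId);
CREATE INDEX idxListArticlesProductGroup ON listArticles (productGroup);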
Thank you again !
I'm a Java dev who uses MySQL Workbench as a database client and IntelliJ IDEA as an IDE. Every day I run SQL queries against the database, anywhere from 5 up to 50 times a day.
Is there a convenient way to save and re-run frequently used queries in MySQL Workbench/IntelliJ IDEA so that I can:
avoid re-typing a full query that has already been used
smoothly access a list of queries I've already used (e.g. by auto-completion)
If there is no way to do it using MySQL Workbench / IDEA, could you please advise any good tools providing this functionality?
Thanks!
Create Stored Procedures, one per query (or sequence of queries). Give them short names (to avoid needing auto-completion).
For example, to find out how many rows are in table foo (SELECT COUNT(*) FROM foo;).
One-time setup:
DELIMITER //
CREATE PROCEDURE foo_ct()
BEGIN
SELECT COUNT(*) FROM foo;
END //
DELIMITER ;
Usage:
CALL foo_ct();
You can pass arguments in to make minor variations. Passing in a table name is somewhat complex, but numbers, dates, etc., are practical and probably easy.
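For instance, here is a minimal sketch of a parameterized procedure; the orders table and its created_at column are hypothetical names used only for illustration:

DELIMITER //
-- Count the rows in the (hypothetical) orders table created on a given day.
CREATE PROCEDURE orders_ct_on(IN d DATE)
BEGIN
SELECT COUNT(*) FROM orders WHERE DATE(created_at) = d;
END //
DELIMITER ;

Usage:

CALL orders_ct_on('2020-01-15');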
If you have installed SQLyog for your MySQL, you can use the Favorites menu option, where you can save your query; with one click it will automatically write the saved query into the Query Editor.
The previous answers are correct: depending on the version of the Query Browser they are called either Favorites or Snippets, the problem being that you can't create sub-folders to group them. Keeping tabs open is an option too, but sometimes the browser 'dies' and you're back to square one. So the obvious solution I came up with: create a database table! I have a few 'metadata' fields for descriptions: the project a query is associated with, the problem the query solves, and the actual query itself.
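A rough sketch of what such a table could look like (the table and column names here are just my own illustration, not from the original setup):

CREATE TABLE query_library (
    id INT AUTO_INCREMENT PRIMARY KEY,
    project VARCHAR(100),   -- project the query is associated with
    problem VARCHAR(255),   -- problem the query solves
    query_text TEXT         -- the actual query
);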
You could keep your query library in an SQL file and load that when WB opens (it's automatically opened when you restart WB if that file was open on last close). When you want to run a specific query, place the caret in its text and press Ctrl+Enter (Cmd+Enter on Mac) to run only that query. The organization of that SQL file is totally up to you. You have more freedom than any "favorites" solution can give you. You can even have more than one file with grouped statements.
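For illustration, one possible layout for such a file (the section names and queries below are made up, not from Workbench itself):

-- ===== Reporting =====
SELECT COUNT(*) FROM customers WHERE active = 1;

-- ===== Maintenance =====
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = DATABASE();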
Additionally, MySQL Workbench has a query history (see the Output tab), which is saved to disk, so you can return to a query even months after you wrote it.
We have a split MS Access database, and the front end was distributed among us. Some users are able to run queries and edit query results without a hitch, but a few others are not able to edit the query results. The query runs fine for them and they are able to see the results in MS Access, but editing any field is not possible.
Could anyone advise me on how this could have happened? I have primary keys on all the tables on which the queries are based, and there are no joins in my queries; each query runs on a single table.
Thank you in advance for your time and help.
With regards,
Manus
I got the answer. The back-end database was on a network drive where some of the users had write access but not all. Silly of me not to look there first. So basically, the users who couldn't edit were the ones with no write access to the shared drive on which the back end of the database was kept.
I'm currently using an Execute SQL Task in SSDT that queries AD. Here's what I have:
<LDAP://ou=groups,dc=blahblah,dc=com>;
(|(cn=*_BI_*)(cn=*.BI.*));
cn
The results eventually write to a table. In the table, I'm getting results for names such as AS_BI_Core, IS_BI_App, and anything that has an underscore. I'm not getting any results for the *.BI.* cn, which is what confuses me. If I do a custom search using Active Directory Users and Computers for (|(cn=*.BI.*)) I get 6 results that are not showing up in the table that I'm writing to. Does anyone know why this would be?
I am trying to import some selective data and create a table in an MS Access 2002 db from a linked table. For some odd reason the performance became really bad all of a sudden when importing the data.
I tried googling and tried various methods like repairing/compacting the db and changing the SubDataSheet Name property from [Auto] to [None], but neither worked.
Can anyone please give me some examples of how to increase the performance of linked tables?
Thank you.
Rather than selecting information from a linked table and trying to make a local table, when using a database server like MS SQL you would do better to create a "Pass-Through Query" to do the select work on the server side, and then run a simple SELECT * against this pass-through to get your data into a local table. This will give the best results if your first select statement is complex and takes a while for Access to run against a linked table; if that is not the issue, then you will need to look at the speed of the network that connects you to your MS SQL server.
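As a rough sketch of that approach (the names dbo.bigTable, qryServerSide and localResults are hypothetical, purely for illustration): the pass-through query, saved in Access as qryServerSide, holds SQL that runs entirely on the server, e.g.

SELECT id, SUM(amount) AS total
FROM dbo.bigTable
GROUP BY id;

and a simple local make-table query then pulls down the already-reduced result set:

SELECT * INTO localResults FROM qryServerSide;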
I have an SSIS package set up to export data from a SQL Server 2008 R2 table to a MySQL version of that table. The package executes; however, about 1% of the rows fail to be exported.
My source connection uses the SQL statement
SELECT * FROM Table1
all of the columns are integers. An example of a row which is exported successfully is
2169, 2680, 3532, NULL, 2169
compared to a row which fails
2168, 2679, 3532, NULL, 2168
virtually nothing different that I can ascertain.
Notably, if I change the source query to attempt the transfer of only a single failing row, i.e.
SELECT * FROM Table1 WHERE ID = 2168
then the record is exported fine; it is only when it is part of a select which returns multiple rows that it fails. The same rows fail the export each time. I have redirected the error rows to a text file, which shows a -1071610801 error for the failing rows. This apparently translates to:
DTS_E_ADODESTERRORUPDATEROW: "An error has occurred while sending this row to destination data source."
which doesn't really add a great deal to my understanding of the issue!
I am wondering if there is a locking issue or something preventing given rows from being fetched or inserted correctly but if anyone has any ideas or suggestions on what might be causing this or even better how to go about resolving it they would be greatly appreciated. I am currently at a total loss...
Try setting a longer timeout (1 day) on the MySQL (ADO.NET) destination.
Well after much head scratching and attempting every work around that I could come up with I have finally found a solution for this.
In the end I switched out the MySQL connector for a different driver produced by devArt, dotConnect for MySQL, and with a few minor exceptions (which I think I can resolve) all of my data is now exporting without error.
The driver is a paid-for product, unfortunately, but in the end I'd have taken out a new mortgage to see all those tasks go green!