When I run the following VBA it does what I need.
DoCmd.RunSQL "DELETE * FROM tmp_tbl_order_short"
I then repopulate the table, but when I run a query against that data it is dog slow. The reason is that the index is gone.
Can I recreate an index via VBA or is there another way of clearing the table without removing the index?
What do you mean by 'the index is gone'? If the field was indexed, then when you add new data it will be re-indexed. You are not deleting the table and recreating it, which you can do, if needed, with SQL statements like the following:
DROP TABLE tempTable;
CREATE TABLE tempTable (all fields);
and follow that by creating the index:
CREATE INDEX myTempIndex
ON tempTable (FieldToIndex);
In my opinion, something else is making your query slower, perhaps the wrong field is indexed, etc.
OK ... my bad. During development of this process the query that repopulated the table was a 'make table' query, which was fine until I realized an index was required. The original VBA sequence cleared the table and then, I THOUGHT, appended the new data.
So the 'DoCmd.RunSQL "DELETE * FROM tmp_tbl_order_short"' was not deleting the index; the make-table query was.
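For reference, a minimal sketch of the keep-the-index pattern described above, using made-up column names and a hypothetical source query called qry_order_source (substitute your real append source): the table is cleared and then appended to, so its definition and index are never dropped.

DELETE * FROM tmp_tbl_order_short;

INSERT INTO tmp_tbl_order_short (OrderID, OrderDate, Qty)
SELECT OrderID, OrderDate, Qty
FROM qry_order_source;

Both statements can be run from VBA with DoCmd.RunSQL, just like the DELETE above.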
Related
I have a schema that is used to archive a data set on a daily basis. Some of the analysis needs to look back, so to optimise things I need to create a couple of indexes on each table. These would be separate (I'm not trying to cross-index or anything), just a simple non-unique index, but on each table in the schema.
The archive has already been building for over a year, so we have some 400-500 tables, making a manual ALTER query on each table a bit too time consuming.
I could write a php script to do it, but wondered if there was a more elegant solution with a single query or transaction?
TIA
I have copied @Shadow's answer from the comments above to show it as the answer:
Well, the alter table and add index sections will be string constants, as you have to generate the alter table statements and then execute the statements you generated in the first step. See an example here: stackoverflow.com/a/44527818/5389997
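As a rough sketch of that approach (assuming MySQL, a schema named archive_schema, and that every archive table has a column named archive_date you want indexed; all three names are placeholders for your own), you can have information_schema generate the ALTER TABLE statements for you:

SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` ADD INDEX idx_archive_date (archive_date);') AS ddl
FROM information_schema.tables
WHERE table_schema = 'archive_schema'
  AND table_type = 'BASE TABLE';

Save the output to a file and source it, or execute each generated statement in a loop with a prepared statement; either way you avoid hand-writing 400-500 ALTER queries.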
I'm experiencing a huge performance problem in one legacy application.
There is a search form where a user can search for records containing a given value.
A result row contains 10 columns. The SP then returns every row that contains that value in any of those columns.
This SP uses 8 tables, some of which have about a million records. A new record arrives every minute. The SP handles paging as well.
Executing this SP sometimes takes around 40 seconds.
What I did was create a new table and populate it with all the records, using the query from this SP but without the conditions.
When there is an insert or update in one of the source tables, a trigger updates this new "cache" table.
Getting results from this new table now takes only 1-3 seconds.
Does anyone have experience with something like this?
One of my colleagues said I should use a view instead, but then I would be doing the JOINs on every query.
What do you think? Is there another way?
Oftentimes, temporary tables can help you resolve performance issues. One approach might be to collect only the records that you need to consider into temporary tables, and then create your final select statement from the temporary tables joined to any other tables that you're not filtering.
As an example, let's say one of the fields you are searching for is field1 in table1. Start by inserting into table #table1 only records that have the value of field1 you are looking for:
select PrimaryKeyTable1, Field1, Field2, Field3, etc...
into #table1
from table1
where Field1 = 'Whatever you are looking for'
This should be pretty fast even for big tables, especially if you have an index on Field1. Do this for every table with search fields, so that you collect all the records relevant to your search.
Then you also need to be sure to insert any records into your temporary tables that might have foreign key references to any of your other temporary tables. So let's say you also built a table #table2 with the above method that has a foreign key to table1 called PrimaryKeyTable1. You would insert those records like:
Insert into #table1
(PrimaryKeyTable1, Field1, Field2, Field3, etc...)
select table1.PrimaryKeyTable1, table1.Field1, table1.Field2, table1.Field3, etc...
from table1
join #table2
on table1.PrimaryKeyTable1 = #table2.PrimaryKeyTable1
where table1.PrimaryKeyTable1 not in
(Select PrimaryKeyTable1 from #table1)
Now #table1 will also contain any records that match a record in #table2 that itself matched the search criteria. You do this for all your temporary tables that have relevant foreign keys. The order of the inserts matters; while collecting the foreign-key-referenced records, be sure not to join to a temporary table until all of its own inserts are complete.
Then you can simply do your final select statement, replacing the actual tables with the temporary tables you have built and eliminating all the filters that search your field data. Depending on the structure of your query there might be other optimizations, but that is the general idea.
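For illustration, a hedged sketch of what that final statement could look like, assuming the #table1 and #table2 built above and a hypothetical unfiltered lookup table table3 (its name, key column PrimaryKeyTable3, and Description column are made up):

select t1.PrimaryKeyTable1, t1.Field1, t1.Field2, t3.Description
from #table1 t1
join #table2 t2 on t2.PrimaryKeyTable1 = t1.PrimaryKeyTable1
join table3 t3 on t3.PrimaryKeyTable3 = t2.PrimaryKeyTable3
order by t1.Field1;

All the WHERE clauses that searched the field data are gone; the filtering already happened while the temporary tables were populated.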
If you've already explored all of your indexing options and this still doesn't help, MS SQL Server has a "Change Tracking" feature that may be of use to you in building your cache table. You enable change tracking on the database and configure which tables you wish to track. SQL Server then creates a change record for every update, insert, and delete on a tracked table and lets you query for the changes made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than rolling your own tracking tables. This has been a feature since SQL Server 2008.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tracked tables and lets you query which columns might have been modified. You then join the tables on those keys to get the current data. If you also want the changed data itself captured, you can use Change Data Capture, but it requires more overhead and at least SQL Server 2008 Enterprise edition.
Change Data Capture
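A minimal sketch of the change tracking route, assuming a hypothetical database MyDb and source table dbo.table1 keyed by PrimaryKeyTable1, and that your sync job stores the last synchronized version number in @last_sync_version between runs:

ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.table1 ENABLE CHANGE_TRACKING;

-- later, in the job that refreshes the cache table:
SELECT ct.PrimaryKeyTable1, ct.SYS_CHANGE_OPERATION, t.*
FROM CHANGETABLE(CHANGES dbo.table1, @last_sync_version) AS ct
LEFT JOIN dbo.table1 t ON t.PrimaryKeyTable1 = ct.PrimaryKeyTable1;

CHANGE_TRACKING_CURRENT_VERSION() gives you the version number to store for the next run.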
Your solution is a robust way of doing what is called an "indexed view" in Microsoft SQL Server or a "materialized view" in Oracle.
Basically you are correct: it's faster to query a single indexed table than a dozen constantly updated ones.
You should really try creating an indexed view (you can start here: https://technet.microsoft.com/en-us/library/dd171921(v=sql.100).aspx); it will probably solve your performance issues.
You can use a schema-bound view and create a clustered index on the view; it will store the view's data physically. But after creating the schema-bound view you cannot alter the underlying table (at least not the columns the view references) without dropping the view first.
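A minimal sketch of an indexed view, using two made-up source tables (dbo.Orders and dbo.Customers) rather than your real schema; the unique clustered index is what makes SQL Server store the view's rows physically:

CREATE VIEW dbo.vw_OrderSearch
WITH SCHEMABINDING
AS
SELECT o.OrderId, o.OrderDate, c.CustomerId, c.CustomerName
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;
GO

CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderSearch
ON dbo.vw_OrderSearch (OrderId, CustomerId);

Queries can then read from dbo.vw_OrderSearch (with the NOEXPAND hint on non-Enterprise editions) instead of re-joining the base tables on every search.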
I have a table in a MySQL database which has about 10 million records/rows. I want to add a new column to the table. However, a simple add-column query doesn't seem to work well for me.
This is what I have tried,
ALTER TABLE contacts ADD processed INT(11);
I waited for about 5 hours, but nothing happened. Is there any way to insert a new column in such a huge table?
Hope I am clear with my question. Any help would be appreciated.
If it's production:
You should use pt-online-schema-change of Percona Toolkit.
pt-online-schema-change emulates the way that MySQL alters tables internally, but it works on a copy of the table you wish to alter. This means that the original table is not locked, and clients may continue to read and change data in it.
pt-online-schema-change works by creating an empty copy of the table to alter, modifying it as desired, and then copying rows from the original table into the new table. When the copy is complete, it moves away the original table and replaces it with the new one. By default, it also drops the original table.
Or oak-online-alter-table, which is part of the openark kit:
oak-online-alter-table allows for non blocking ALTER TABLE operations, table rebuilds and creating a table's ghost.
Altering the table this way will be slower, but it doesn't lock the table.
If it's not production and downtime is okay, try this approach:
CREATE TABLE contacts_tmp LIKE contacts;
ALTER TABLE contacts_tmp ADD COLUMN processed INT UNSIGNED NOT NULL;
INSERT INTO contacts_tmp (contact_table_fields) SELECT * FROM contacts;
RENAME TABLE contacts TO contacts_old, contacts_tmp TO contacts;
DROP TABLE contacts_old;
I have a large db that I am chopping into smaller databases based on time intervals. This will reduce query time dramatically. In a query can I copy a resultset from one database to another with an identical schema?
Basically a select followed by an update conducted in the same code block?
Thanks,
slothishtype
Copying data from one database into another should be almost as simple as @slothishtype describes, except you'll need to qualify it with the OTHER database you want it replicated into.
CREATE TABLE OtherDatabase.Student SELECT * FROM FirstDatabase.student;
However, copying the same schema, as you mention, is something else. If you want all your R/I rules, triggers, etc., you may have to dump the schema from your first database (where it has all the CREATE TABLE statements, indexes, etc.) and run it in the new database. That might pose an issue where you have auto-incrementing columns: you can't write to a read-only auto-increment column; the database controls that. If that were the case, you would have to make those columns plain integer datatypes (or similar) and do a
insert into OtherDatabase.Student ( field1, field2, etc )
select field1, field2, etc from FirstDatabase.student
If it is not necessary to add it to a new database, this will do fine:
CREATE TABLE student1 SELECT * FROM student
EDIT: For the record, this will not copy over indexes, etc.
This, however, will:
CREATE TABLE student_new LIKE student;
INSERT INTO student_new SELECT * FROM student;
slotishtype
I have a table with 10+ million rows. I need to create an index on a single column, however, the index takes so long to create that I get locks against the table.
It may be important to note that the index is being created as part of a 'rake db:migrate' step... I'm not averse to creating the index manually if that will work.
UPDATE: I suppose I should have mentioned that this a write often table.
MySQL's NDBCLUSTER engine can create an index online without locking writes to the table. However, the most widely used engine, InnoDB, does not support this feature. Another free and open-source database, Postgres, supports CREATE INDEX CONCURRENTLY.
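For comparison, this is roughly what the non-blocking form looks like in Postgres, with made-up table and column names:

CREATE INDEX CONCURRENTLY index_my_table_on_my_column
ON my_table (my_column);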
you can prevent the blockage with something like this (pseudo-code):
create table temp like my_table;
-- switch the logger to write to temp
alter table my_table add index new_index (column_to_index);
insert into my_table select * from temp;
-- switch the logger back to my_table
drop table temp;
Where "logger" means whatever adds rows/updates to your table in regular use (e.g. a PHP script). This sets up a temporary table to take writes while the index is added to the other one.
Try to make sure that the index is created before the records are inserted. That way, the index will also be filled during the population of the table. Although that will take longer, at least it will be ready to go when the rake task is done.
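As a rough sketch with made-up names: define the index while the table is still empty, then load the rows, and the index is filled as part of the load rather than by a separate, long-running ALTER afterwards.

CREATE TABLE my_table (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  my_column INT NOT NULL,
  INDEX idx_my_column (my_column)
);

INSERT INTO my_table (my_column)
SELECT my_column FROM source_table;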