What are the pros and cons of using Memo fields in Microsoft Access 2010 (accdb) databases?
I'm altering an Access 2010 (accdb) database to convert 5 columns in a table of 45,000 records to Memo. This data will then be imported into SQL Server as varchar(max).
I'm creating new fields for the data and then copying the data across from the text(255) fields, as neither 'ALTER TABLE' in VBA nor changing the column type through the MS Access table design view is working ("not enough disk space or memory").
This is making me feel very wary about using this many Memo fields. Eventually there will be 4 tables with 5 Memo fields each. Each table will have around 100+ fields in total, with up to 400,000 records.
Should I just go back to the end user and tell them that they will have to use text(255) rather than multiple Memo fields? I suspect that the fields have been defined as Memo 'just in case'.
I really don't think 5 memo fields is a lot, and Access will only claim disk space for the values actually entered. While a memo field can hold 1GB of characters (2GB of storage), Access may only display the first 65K characters or so. I would not be overly concerned about using memo fields in general, and I don't think this is your issue.
If the memo fields are already part of your base table, you would use an UPDATE statement to move the data from the other fields into them. If you are exporting the tables to SQL Server and then making the columns larger, the statement
ALTER TABLE MyTable ALTER COLUMN MyColumn nvarchar(MAX)
should work without issue.
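For the in-Access copy, the UPDATE can be a one-liner; a minimal sketch, with MyTable, MyMemoField and MyTextField as placeholder names:

UPDATE MyTable SET MyMemoField = MyTextField;

Once the memo columns are populated, the old text(255) columns can be dropped.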
You may want to consider creating a single relational table with a table key, record key, and a single memo field, and then allow as many memo fields as necessary by referencing this table in a one-to-many fashion from the base table.
So you would have this type of data:
TABLE    PK   MEMO
---------------------------------
TableA   1    Note A
TableA   1    Note B
TableA   2    Note A
TableB   1    Note A
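If you go that route, the DDL for the notes table could look roughly like this (all names are placeholders; MEMO is the Access DDL type when the statement is run through DAO, LONGTEXT through ADO):

CREATE TABLE Notes (TableName TEXT(50), RecordPK LONG, NoteText MEMO);

On the SQL Server side, the NoteText column would then map to varchar(max), matching your import target.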
Where exactly are you getting an out of disk space error?
I'm working in Access for Microsoft 365. I have a table with 3,000 entries; one of the fields is called "Artist" (the name of the artist). I've decided to turn this into a Lookup field and make a table of the artist names (so that they are all spelled correctly and the user can't misspell a name). Since the data is already there, how can I ask Access to take the data that's there and match it against the Artist table (hoping that it's a match)? Everything I've tried deletes the artist name from all 3,000 entries.
I advise against building lookups in the table. Just build a combo box on the form.
Do a Find Unmatched query to identify records in your data that do not have a match in the Artists table; Access has a query wizard for that. Manually correct the spelling for any bad records it locates.
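The wizard produces roughly this kind of query (YourData, Artist and ArtistName are assumed names):

SELECT YourData.Artist FROM YourData LEFT JOIN Artists ON YourData.Artist = Artists.ArtistName WHERE Artists.ArtistName Is Null;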
You can save the artist name into your data, but a better alternative may be to replace the name field in your data table with a number field storing ArtistID. This will involve an UPDATE query using a JOIN of the two tables on the name fields. Once the new foreign key field is populated, delete the now-unnecessary name field from your data table.
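A sketch of that update query in Access SQL, again with assumed names (ArtistID is the new number field in your data table, ID is the Artists table's key):

UPDATE YourData INNER JOIN Artists ON YourData.Artist = Artists.ArtistName SET YourData.ArtistID = Artists.ID;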
This is a problem that bothers me whenever there is a need to add a new field to a table. Here the table has about 1.5 billion records (partitioned and sharded, so it is physically separate files). Now I need to add a nullable varchar(1024) field, which is going to accept some JSON strings. It is possible that the field length will have to be increased in the future to accommodate longer strings.
Here are the arguments
All existing rows will have null values for this field. (fav. new table)
Only 5% of the newly inserted records will have value for this. (fav. new table )
Most of the current queries on the table will need to access this field. (fav. alter)
I'm not sure if query memory allocation has a role to play in this, depending on where I store the field.
So should I add the field to the current table, or define another table with the same primary key to store this data?
Your comments would help a decision.
Well, if your older records won't need that varchar field, you should put it in another table and, when pulling data, join on the primary key of the other table.
It's not a big deal: you can simply add the column to that table and leave it null for the existing rows.
I think that, regardless of the 3 situations you have posited, you should alter the existing table, rather than creating a new one.
My reasoning is as follows:
1) Your table is very large (1.5 billion rows). If you create a new table, you would replicate the PK for 1.5 billion rows in the new table.
This will cause the following problems:
a) Wastage of DB space.
b) Time-intensive. Populating a new table with 1.5 billion rows and updating their PKs is a non-trivial exercise.
c) Rollback-segment exhaustion. If the rollback segments have insufficient space during the insertion of the new rows, the insert will fail. This will increase the DB fragmentation.
On the other hand, all these problems are avoided by altering the table:
1) There is no space wastage.
2) The operation won't be time-consuming: adding a nullable column is typically a quick metadata-only change, since existing rows do not have to be rewritten.
3) There is no risk of rollback segment failure or DB fragmentation.
So alter the table.
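In most engines that is a single short statement, e.g. (customer and json_data are placeholder names):

ALTER TABLE customer ADD json_data VARCHAR(1024) NULL;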
Both these approaches have merits and demerits. I think I found a compromise between the two options which has the benefits of both approaches:
Create a new table to hold the JSON string. This table has the same primary key as the first table. Say the first table is Customer, and the second table is Customer_json_attributes.
Alter the current table (Customer) to add a flag indicating the presence of a value in the JSON field, say json_present_indicator char(1).
The application sets json_present_indicator = 'Y' in the first table if there is a value for the JSON field in the second table; if not, it sets it to 'N'.
Select queries will have a left join with json_present_indicator = 'Y' as a join condition, as sketched below. This will be an efficient join, as the query will search the second table only when the indicator is 'Y'. Remember, only 5% of the records will have a value in the JSON field.
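A sketch of such a query, using the table names above (customer_id and json_data are assumed column names):

SELECT c.*, j.json_data
FROM customer c
LEFT JOIN customer_json_attributes j
ON j.customer_id = c.customer_id
AND c.json_present_indicator = 'Y';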
I want to create a database table using MySQL for a transport application. Here I want to add columns which are not fixed for every record; they are added dynamically per record. For example, record 1 contains PoliceFees & StateBoundry, whereas record 2 does not have these fields; record 3 might have some other fields, and so on. So how should I design a table for such data?
Dynamic fields and MySQL (a relational database)? I think NoSQL is a better solution to your problem.
But if all the fields are known in advance, you can create a table with all of them defined as nullable, so you only insert the data each record needs.
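A minimal sketch of that design (the table and the trip_ref column are made up; PoliceFees and StateBoundry come from the question):

CREATE TABLE transport_record (
  id INT AUTO_INCREMENT PRIMARY KEY,
  trip_ref VARCHAR(20) NOT NULL,
  PoliceFees DECIMAL(10,2) NULL,
  StateBoundry DECIMAL(10,2) NULL
);

INSERT INTO transport_record (trip_ref, PoliceFees, StateBoundry) VALUES ('T-001', 150.00, 75.00);
INSERT INTO transport_record (trip_ref) VALUES ('T-002'); -- this record has neither field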
I'm working on a project where I need to be able to create custom fields on employees. These fields would be things like First Name, Last Name etc.
I'm required to optimize this to work for 10,000 employees with 200 fields.
Right now I have an "employee" table, a "field" table and pivot table ("employee_field"). The pivot table stores the employee's data for each of the fields in the nullable column with the data type required for that field. It also contains the employee id and the field id.
I'm finding that joining these tables takes about 0.5 seconds to load 500 employees with 50 fields.
I'm about to try creating another table that keeps all of the joined data I need for the application. This would basically be a table that contains the employee id, field id, the field label, the formatted data, and the field type alias. This table would be kept up to date using database triggers.
Question: Am I following the best practice for doing this kind of join, and is there any way to optimize this for reading this data?
You have an entity-attribute-value data model. There is nothing per se wrong with such a model, but it seems like overkill for your purposes.
MySQL should be able to readily handle a table with 200 columns. My recommendation is to eschew the joins and just define the table that you need.
Now, your situation might be a bit more fluid. Perhaps new columns need to be added. In that scenario, new fields are fine, provided adding them is infrequent and they apply to all employees.
If you frequently need to handle new fields, or different employees have different subsets of fields, then I would recommend a hybrid model: put the dozens of common fields into a single table, and then build a more flexible EAV model for the new attributes, as sketched below.
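A rough sketch of that hybrid layout (all names hypothetical):

CREATE TABLE employee (
  id INT AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(100) NOT NULL,
  last_name VARCHAR(100) NOT NULL
);

CREATE TABLE employee_attribute (
  employee_id INT NOT NULL,
  field_id INT NOT NULL,
  value TEXT NULL,
  PRIMARY KEY (employee_id, field_id),
  FOREIGN KEY (employee_id) REFERENCES employee (id)
);

Common reads then never touch the attribute table; the join only runs for the rare custom fields.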
After further testing I've come to the conclusion that it has something to do with my app's binding to SQL and not the SQL schema.
So the root of this problem may lie in poor database design; some of the way this is set up is inherited from older versions. I just couldn't figure out a better way to do this.
I have four tables linked by the same field: [OBJECTID]. Each table is linked to an Access form that controls the data. It is important that these tables be separate, as the data is georeferenced and needs to be mapped separately; however, they inherit several fields from one another by default.
Most of the time, the tables are in a one-to-one-to-one-to-one relationship; occasionally, however, there is only data for the first table, and occasionally there is only data for the second, third and fourth form.
Right now, the [OBJECTID] field in the first table is set to the AutoNumber data type, so that all subsequent linked records in the other tables can inherit that number. For the cases where records in Tbl1 are not entered via Form1, it is easy enough to just assign a number that does not conflict with any current number, but how do I avoid assigning a number that could conflict with some future [OBJECTID] generated by the AutoNumber field in Tbl1?
Sorry if that is confusing! Thanks in advance for helping me think this through....
If the design is correct, there should be a relationship with referential integrity between Tbl1 and tables 2/3/4. Since you mention that occasionally there is only data for the second, third and fourth form, that means we have no referential integrity here :-/
I would identify the fields that are common to all 4 tables and create a "main" table with those, meaning that the main table MUST be filled. Then you create a 1-to-0,1 relationship from it to each of the other 4 tables, with an outer join, their PK then being a Long Integer.
For the record source of your forms 1 to 4, use an outer join between MainTable and T1/2/3/4. The "subtables" will then inherit the PK of the main table.
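A record source could then look roughly like this (MainTable, Tbl1, and the ID field names are assumptions):

SELECT * FROM MainTable LEFT JOIN Tbl1 ON MainTable.ID = Tbl1.ID;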
Hope I am not too obscure.