Access Import CSV file

I'm trying to import a CSV file into Access. However the way the CSV file is formatted, it creates three lines for the same transaction. Is it possible to tell Access that every three lines belong to the same transaction when creating or appending the table? VBA is fine, if that's the only way possible.
Currently, during the import of one transaction, Access creates 8 fields for the first line, 29 fields for the second line, and 8 more for the third line. I would prefer either having all 45 fields in the one transaction record or telling Access that I only need certain fields. For example, I need only field 2 of the first line, field 11 of the second line, and field 3 of the third line for each transaction.
Thanks in advance for your help.

I'm not aware of any way of doing that within Access. However, CSV is a very specific format, and the way yours is laid out is incorrect according to the CSV standard: it should be one record per line.
Use an advanced text editor such as Sublime or Notepad++ to edit the CSV file such that each newline character actually represents the end of a record.
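That said, since the question allows VBA, here is a minimal sketch of a line-by-line approach instead: read the file three lines at a time and write one combined record per transaction. The table tblTransactions, its fields F1/F2/F3, and the file path are hypothetical names; the field positions (field 2 of line 1, field 11 of line 2, field 3 of line 3) follow the question's example, and the plain Split on commas deliberately ignores commas inside quotes to keep the sketch short.

Sub ImportThreeLineTransactions()
    ' Read the CSV three lines at a time and insert one record per transaction.
    ' tblTransactions and its fields F1, F2, F3 are placeholder names.
    Dim f As Integer
    Dim line1 As String, line2 As String, line3 As String
    Dim p1() As String, p2() As String, p3() As String
    Dim db As DAO.Database

    Set db = CurrentDb
    f = FreeFile
    Open "C:\Data\transactions.csv" For Input As #f   ' hypothetical path

    Do While Not EOF(f)
        Line Input #f, line1
        If EOF(f) Then Exit Do
        Line Input #f, line2
        If EOF(f) Then Exit Do
        Line Input #f, line3

        p1 = Split(line1, ",")   ' naive split; does not handle commas inside quoted fields
        p2 = Split(line2, ",")
        p3 = Split(line3, ",")

        ' Field 2 of line 1, field 11 of line 2, field 3 of line 3 (1-based, per the question).
        db.Execute "INSERT INTO tblTransactions (F1, F2, F3) VALUES ('" & _
                   Replace(p1(1), "'", "''") & "', '" & _
                   Replace(p2(10), "'", "''") & "', '" & _
                   Replace(p3(2), "'", "''") & "')", dbFailOnError
    Loop

    Close #f
End Sub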

Related

Import CSV file into an MS Access table - data ignored [duplicate]

Access is truncating the data in a couple Memo fields when I am appending an Excel file. The field in the Access table is already set as a Memo type. I believe the problem is that I do not have any entries in the first few rows of some of the memo fields. Access is assuming the data is a text field, even though I have already set it as a Memo type.
I have tried appending as a CSV. Did not work.
I have put dummy data in the first row that exceeds the 255 character limit and the data is not truncated if I do that.
I do not want to have to put dummy data in every time I have to import an Excel file. This is a process that will be completed at least biweekly, maybe more frequent. I would like to set up an easy way to import the data for future employees that work with the same database. Any ideas?
Update: Even with dummy data in the first couple of rows, Access is truncating the data for 3 out of the 10 Memo fields when I import the Excel file (the character length of the dummy data is 785). Now I am really at a loss for ideas.
It has been a while, but I was having the same issues as you.
After much digging, I found that the wonderful world of Microsoft explains:
To avoid errors during importing, ensure that each source column contains the same type of data in every row. Access scans the first eight source rows to determine the data type of the fields in the table. We highly recommend that you ensure that the first eight source rows do not mix values of different data types in any of the columns. Otherwise, Access might not assign the correct data type to the column.
Apparently this means that when appending an Excel file to an existing table, even when the columns are formatted and saved as Memo fields, if all of the first eight rows in the Excel file are shorter than 256 characters, Access assumes you actually meant Text, and truncates the remaining rows after 255 characters. I have performed several tests placing "dummy" rows within the top eight rows, and each one triggered the import of more than 255 characters.
Now, if you import to a new table, the wizard allows you to pick all of the formatting options.
Importing to a new table is convenient if you are okay with overwriting all of the data already in the table. However, if you truly need to append, I would suggest importing to a temporary table and then appending from there. An easy way to do this is to save an import and then execute it from VBA, like Elliot_et_al wanted to do. You could then also run the append query in VBA as well. If you set up your tables correctly you may be able to get away with
INSERT INTO [MyTable]
SELECT [MyTable_temp].*
FROM [MyTable_temp];
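A hedged sketch of that automation is below; "MySavedImport" is a placeholder name for a saved import (External Data > Saved Imports) that loads the file into MyTable_temp, and the table names match the query above.

Sub RunImportAndAppend()
    ' Drop the old staging table so the saved import recreates it cleanly,
    ' run the saved import, then append the fresh rows to the permanent table.
    On Error Resume Next                          ' ignore the error if the staging table doesn't exist yet
    DoCmd.DeleteObject acTable, "MyTable_temp"
    On Error GoTo 0
    DoCmd.RunSavedImportExport "MySavedImport"    ' saved import that fills MyTable_temp
    CurrentDb.Execute "INSERT INTO [MyTable] SELECT [MyTable_temp].* FROM [MyTable_temp];", dbFailOnError
End Sub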
For what it's worth... I ran into a similar problem with Access 2013: it was truncating fields to 255 characters on import from XLS, even when the Import Wizard selected LONG TEXT as the field type, and even when I had fields with more than 255 characters in the first few rows.
A colleague suggested that I link the spreadsheet instead of importing it to a new table, and the issue went away. I also created a new table based on the linked one, and all is good.
EDITED TO ADD: In Access 2013, if you've already imported the XLS file into Access and cannot go back to it to try to link first, try this instead:
Go to Design View of the table, go to Field Properties at the bottom of that screen, and set the Long Text field's "Text Format" to "Rich Text". Just today, this saved me from having to recreate a table that I had imported from Excel months ago: even though I had the "Notes" column set to Long Text, it was still truncating text that I entered manually to 255 characters. Switching to Rich Text made this text visible.
I use excel to communicate with external partners and capture reports from them into an access database. I've found the best way to do this is to insert a "dummy" first row into the worksheet that contains greater than 255 characters in any given column where the user-populated data is likely to exceed 255 characters.
In this way when I import the data it always imports all the text, and then I can simply delete the "dummy" row from the database table.
I frequently use an "import template" workbook that I link to from my Access database. I set the template page to be formatted as a table before linking (so that the import contains all data without having to specify the range each time), and make the first "dummy" row hidden in there.
In this way I can simply copy and paste the data into the import template and then run a database query to import (and if necessary, transform) the data into the database, with a second query to delete the "dummy" record afterwards.
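A rough sketch of those two queries, with hypothetical names (tblImportTemplate for the linked template sheet, tblReports for the destination, and an ID value of 'DUMMY' marking the dummy record):

' Pull the pasted data from the linked template into the real table...
CurrentDb.Execute "INSERT INTO tblReports SELECT * FROM tblImportTemplate;", dbFailOnError
' ...then remove the "dummy" record afterwards (assumed here to be flagged by ID = 'DUMMY').
CurrentDb.Execute "DELETE FROM tblReports WHERE ID = 'DUMMY';", dbFailOnError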
Hope this helps.
Excel and Access are quirky. Apparently, appending Excel or CSV data to the end of an existing Access table that already has the Long Text properties set is an issue: appending data will default all Long Text to Short Text. The workaround was to output the data to Excel, append the data into one table, and then import it as a new table in Access. Access has a problem in that it treats appended data as Short Text instead of Long Text regardless of what you do.
Do make sure, when using the import wizard, to change the data type of the column to Long Text.
I hope this helps.
I faced the same issue in MS Access 2013. When I imported an Excel sheet in which one of the columns contained text longer than 255 characters, it was truncated. I did a lot of research and finally found a workaround. Somehow the Access database determines the size of a text column from the length of that column in the first record and fixes that length for the subsequent records. If its length is < 255, Access automatically limits further records to 255 characters (or whatever the first record's column length is). I made sure the first record had the maximum length of all the records' text in that column (by sorting), then imported, and it worked well for me.
I had the same exact problem with Access 2010. I found two different workarounds after finding out that Access looks at the first 25 records to determine the data type of each column when importing.
Sorted the records to be imported by the length of the column, in descending order. This means records with more than 255 characters in that column will be among the first 25 records, and Access was then able to import them without truncating.
Created a link table specifying the column's data type as Memo and then appended to the table.
I've had luck in the past with the Rich Text solution offered above, as well as with using "dummy rows" as the first record imported. Thank you for those! However, today I think I've come across a more efficient and consistent solution for imports you'll repeat many times. I tried this in Access 2007.
Use the import wizard as if you're importing the data to a new table. Go through all the screens setting your specifications. Most important, check or specify the data type for each field in the tedious Field Options / Data Type area (for my recent text file, this was the 3rd screen of the Import Text Wizard)--be sure to specify your Memo fields here. (Don't worry, you'll only have to do this once!)
When you arrive at the final "That's all the info the wizard needs..." screen, look for the "Advanced..." button on the lower left. This brings up a screen summarizing everything you just did. Look for "Save as..." on the right. Save these specs with a helpful name. (You can confirm you saved your specs by clicking "Specs..." directly below.) Click "Okay" to leave the advanced screen.
You can now cancel out of the wizard if you don't actually need to create a new table. Next--and this is what you can do every time from now on to avoid truncations--go to the normal import wizard with "Append a copy of the records to the table..." In the wizard, you should see that same "Advanced..." button. Open it, click "Specs...", and double-click your saved specification. Say "OK" to exit "Advanced," and complete the wizard. This should tell Access to keep your memo fields as memo fields!
When importing CSVs to existing tables, I find I need to go through a couple of the normal wizard screens (e.g. specify the Text Qualifier) before going to the "Advanced" screen. Not sure why this makes it happy, just FYI.
I hope this helps someone else who has struggled with Field Truncation import errors like me!
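If you later want to script this rather than click through the wizard each time, the same saved specification can be passed to DoCmd.TransferText from VBA. A minimal sketch, assuming the specification was saved as "MyMemoSpec" and the destination table is tblData (both placeholder names):

' Append the CSV to tblData using the saved import specification, which is what
' keeps the Memo / Long Text fields from being guessed as plain Text.
DoCmd.TransferText acImportDelim, "MyMemoSpec", "tblData", "C:\Data\file.csv", True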
In many cases you can just change the Text Format property of the Memo field from Plain Text to Rich Text; if you then open the table data, you can see all of the imported text.

Verifying and Formatting Text before importing it to MySQL

I have an issue importing some CSV/TXT files.
Here at the company we receive files from other sources (companies). Some of these files sometimes come partially broken.
For example, a file containing 6 columns (id, name, city, state, zipCode, phone) and 2 million lines. The first 10,000 lines of that file are OK, but in the middle of the file, instead of 6 columns, some lines have 5 or even 7 columns.
It seems like somebody "merged" several files into this one and did not pay attention to the number of columns. So when I import it to my MySQL database table, the data comes in very messy because the columns are broken: the zipCode values show up in the state field, and so on.
I was wondering how to scan such a file before importing it to my DB, for example by counting the ";" delimiters in each line. Would this be done using a regex, or what would be the best option?
My program is written in Lazarus/Pascal.
I would read the file line by line and check the columns.
If a line has the expected column count, copy it to another file (input_OK.csv).
If it doesn't, dump it into a broken-lines file (input_KO.csv).
Study input_KO.csv errors, correct them then import the corrected file into the database.
IMO, a regex would take too long here.
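The question is in Lazarus/Pascal, but the check itself is just a per-line delimiter count. Here is a rough sketch of it in VBA to match the rest of this page (file names from the question, paths hypothetical); the same loop translates directly to Pascal's ReadLn/WriteLn. Note that a plain count of ";" characters ignores delimiters inside quoted fields.

Sub SplitGoodAndBadLines()
    Const EXPECTED_COLS As Long = 6          ' the file described has 6 columns
    Dim fIn As Integer, fOk As Integer, fKo As Integer
    Dim s As String

    fIn = FreeFile: Open "C:\Data\input.csv" For Input As #fIn
    fOk = FreeFile: Open "C:\Data\input_OK.csv" For Output As #fOk
    fKo = FreeFile: Open "C:\Data\input_KO.csv" For Output As #fKo

    Do While Not EOF(fIn)
        Line Input #fIn, s
        ' UBound(Split(...)) + 1 is the field count for a ";"-delimited line.
        If UBound(Split(s, ";")) + 1 = EXPECTED_COLS Then
            Print #fOk, s
        Else
            Print #fKo, s
        End If
    Loop

    Close #fIn: Close #fOk: Close #fKo
End Sub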

phpmyadmin export CSV to excel drops data

I'm using MySQL with XAMPP and using phpmyadmin to extract data from tables. If I choose to export to CSV, the data appears to be fine. But when I select to export using the "CSV for MS Excel" option, I'm losing some data in the export file. Settings are the same in both cases.
Specifically, if a field has a comma in it, it appears that at least sometimes the data after the comma is dropped. Note that the comma is contained in quotes with other text in the standard CSV format, so the comma should not be seen as a field delimiter. The data after the comma within the field is dropped, but in addition, data in fields that follow the field with the comma are also dropped, but not necessarily for the entire record.
So, let's say record 2 has a comma in a text-based field in column C, such as "big spender, nice guy." What goes into Column C in Excel is "big spender" with ", nice guy" being dropped. In addition, Columns D, E, F and G may also lose their data. But in some cases it appears that later columns (perhaps H, I, J and K) may have the correct data in them. I'm not suggesting it always loses data for any specific number of columns, just that some number seem to lose data but at times later columns start having data again in the correct column.
I can't see a clear pattern to what gets dropped and what doesn't, just that what I describe above happened yesterday in a data set I'm using. Note I can see the complete data in the SQL table, and if I use the straight CSV export, it appears that no data is lost.
Could this be a bug? I've searched for known bugs and found none. FYI, I'm using Excel in Office 2007 on a Windows 7 machine. The original data source is SugarCRM.
Thanks so much.
Open up the CSV file phpmyadmin made for you with a text editor, not with Excel. Find the offending row (the one with big spender, nice guy in it). Look to see whether it looks like this
"whatever","whatever","big spender, nice guy", 123, 456
or
whatever,whatever,big spender, nice guy, 123, 456
If it's the second one, your columns aren't delimited properly. CSV is deceptively hard to get right because of this, and because of the possibility of this kind of text string:
Joe said, "O'Meara is a big spender and a nice guy!"
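For reference, a standards-compliant CSV export handles that case by wrapping the field in quotes and doubling any embedded quotes, so the row would look something like this (the surrounding values are made up):

"whatever","Joe said, ""O'Meara is a big spender and a nice guy!""",123,456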
You may wish to try exporting your data in a tab-delimited rather than comma-delimited file to overcome this. You can do this by specifying ordinary, not Excel-style, CSV. Then specify
\t
where it asks you for "Columns separated with:".
Excel will be able to figure this out as it reads it.

process csv File with multiple tables in SSIS

I'm trying to figure out whether it's possible to pre-process a CSV file in SSIS before importing the data into SQL.
I currently receive a file that contains 8 tables with different structures in one flat file.
The tables are identified by a row containing the table name enclosed in square brackets, e.g. [DOL_PROD].
The data is underneath in standard CSV format: headers first and then the data.
The tables are separated by a blank line, and the pattern repeats for the next 7 tables.
[DOL_CONSUME]
TP Ref,Item Code,Description,Qty,Serial,Consume_Ref
12345,abc,xxxxxxxxx,4,123456789,abc
[DOL_ENGPD]
TP Ref,EquipLoc,BackClyLoc,EngineerCom,Changed,NewName
Is it possible to split it out into separate CSV files, or process it in a loop?
I would really like to be able to perform all of this automatically with SSIS.
Kind Regards,
Adam
You can't do that by flat file source and connection manager alone.
There are two ways to achieve your goal:
You can use a Script Component as the source of the rows and to process the files; then you can do whatever you want with the file programmatically.
The other way is to read your flat file treating every row as a single column (i.e. without specifying a delimiter) and then, via Data Flow transformations, split the rows, recognize the table names, split the flows, and so on.
I'd strongly advise you to use the Script Component, even if you have to learn .NET first, because the second option will be a nightmare :). I'd use a Flat File Source to extract lines from the file as a single column and then work on them in a Script Component, rather than reading a "raw" file directly.
Here's a resource that should get you started: http://furrukhbaig.wordpress.com/2012/02/28/processing-large-poorly-formatted-text-file-with-ssis-9/
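If you do go the code route, the splitting logic itself is only a handful of lines. Here is a rough sketch of it (written as VBA to match the rest of this page, with hypothetical paths); inside SSIS you would express the same loop in the Script Component's .NET code, or run something like this as a pre-processing step before the package.

Sub SplitMultiTableCsv()
    ' Split a file where each section starts with a [TableName] row into one CSV
    ' per table ([DOL_CONSUME] -> DOL_CONSUME.csv, etc.). Blank separator lines are skipped.
    Dim fIn As Integer, fOut As Integer
    Dim s As String, tableName As String

    fIn = FreeFile
    Open "C:\Data\combined.csv" For Input As #fIn   ' hypothetical input path
    fOut = 0

    Do While Not EOF(fIn)
        Line Input #fIn, s
        If Left$(Trim$(s), 1) = "[" Then
            ' New section: close the previous output file and open one named after the table.
            If fOut <> 0 Then Close #fOut
            tableName = Mid$(Trim$(s), 2, Len(Trim$(s)) - 2)   ' strip the square brackets
            fOut = FreeFile
            Open "C:\Data\" & tableName & ".csv" For Output As #fOut
        ElseIf Len(Trim$(s)) > 0 And fOut <> 0 Then
            Print #fOut, s    ' header row and data rows go to the current table's file
        End If
    Loop

    If fOut <> 0 Then Close #fOut
    Close #fIn
End Sub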

ACCESS: Truncation error when appending CSV data to tables?

I am currently experiencing difficulties when trying to append data to existing tables.
I have about 100 CSV files that I would like to create a single table from; all the tables have different column structures but this isn't really an issue as the associated field names are in the first row of each file.
First, I create a new table from one of the files indicating that my field names are in the first row. I change the particular fields that have more than 256 characters to memo fields and import the data.
I then add to the table the fields that are missing.
Now, when I try to append more data, I again select that my field names are in the first row, but now I receive a truncation error for data that is destined for the memo fields.
Why is this error occurring? Is there a workaround for this?
edit
Here is an update regarding what I've attempted to solve the problem:
Importing and appending tables will not work unless they have the exact same structure. Moreover, you cannot create a Master table with all fields and properties set, then append all tables to the master. You still receive truncation errors.
I took CodeSlave's advice and attempted to upload the table, set the fields that I needed to be Memo fields, and then append the table. This worked, but again, the memo fields are not necessarily in the same order in every data file, and I have 1200 data files to import into 24 tables. Importing the data table by table is just NOT an option for this many tables.
I expect what you are experiencing is a mismatch between the source file (CSV) and the destination table (MS Access).
MS Access will make some guesses about what the field types are in your CSV file when you are doing the import. However, it's not perfect. Maybe it's seeing a string as a memo, or a float as a real. It's impossible for me to know without seeing the data.
What I would normally do, is:
Import the second CSV into its own (temporary) table
Clean up the second table
Then use an SQL query to append those records from the second table to the first table.
Delete the second table
(repeat for each CSV file you are loading).
If I knew ahead of time that every CSV file was already identical in structure, I'd be inclined to instead concatenate them all together into one, and only have to do the import/clean-up once.
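Since the files here are not identical, a hedged VBA sketch of that per-file loop is below. The folder, the staging table tmpImport, the destination MyTable, and the saved import specification "MyMemoSpec" (one that declares the Memo fields, as discussed elsewhere on this page) are all placeholder names, and any per-file clean-up would go between the import and the append.

Sub ImportFolderOfCsvs()
    Dim folder As String, fileName As String

    folder = "C:\Data\Incoming\"                  ' hypothetical folder of CSV files
    fileName = Dir(folder & "*.csv")

    Do While Len(fileName) > 0
        ' 1. Import the CSV into its own temporary table (created fresh each time).
        DoCmd.TransferText acImportDelim, "MyMemoSpec", "tmpImport", folder & fileName, True
        ' 2. (Any clean-up of tmpImport would go here.)
        ' 3. Append those records to the permanent table.
        CurrentDb.Execute "INSERT INTO [MyTable] SELECT * FROM [tmpImport];", dbFailOnError
        ' 4. Delete the temporary table and move on to the next file.
        DoCmd.DeleteObject acTable, "tmpImport"
        fileName = Dir
    Loop
End Sub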
I had a very similar problem: trying to import a CSV file with large text fields (> 255 chars) into an existing table. I declared the fields as Memo but they were still being truncated.
Solution: start an import to link a table and then click on the Advanced button. Create a link specification which defines the relevant fields as Memo fields, and then save the link specification. Then cancel the import. Do another import, this time the one you actually want, which appends to an existing table. Click on the Advanced button again and select the link specification you just created. Click Finish and the data should be imported correctly without truncation.
I was having this problem, but noticed it always happened only to the first row. So by inserting a blank row in the CSV it would import perfectly; then you just need to remove the blank row from the Access table.
Cheers,
Grae Hunter
Note: I'm using Office 2010