I have a tab-delimited text file and want to import it into MS Access using VBA code.
I have created an MS Access form and have used the DoCmd.TransferText method:
DoCmd.TransferText(TransferType, SpecificationName, TableName, FileName, HasFieldNames, HTMLTableName, CodePage)
It works well for CSV files, but I'm not sure how to do it for tab-delimited text files.
Any suggestions?
Do a manual import, changing the delimiter to TAB, save the import spec, and then specify that import spec in your TransferText command.
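For example, if the specification was saved as "TabImportSpec" during that manual import (via Advanced... > Save As... in the import wizard), the call could look like this sketch; the spec, table, and path names are just placeholders:

' Minimal sketch, assuming an import spec named "TabImportSpec" was saved from the wizard
Sub ImportTabDelimitedFile()
    DoCmd.TransferText TransferType:=acImportDelim, _
        SpecificationName:="TabImportSpec", _
        TableName:="tblImport", _
        FileName:="C:\Data\myfile.txt", _
        HasFieldNames:=True
End Sub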
We have a .txt file with UTF-16 LE encoding (discussed here, as well). We need to load this file into an Azure SQL database. We are first trying to convert the file to CSV format using the Text Import Wizard on the Data tab of Excel 365. But if we use ^|^,^|^ as a custom delimiter, the first and last columns still end up with a ^|^ value.
Question: What are possible solutions/workarounds for converting this type of file to CSV?
Remarks: This is a huge file (1 GB) with about 150 columns. The following is just a sample to explain the scenario in this post.
Sample of the txt file:
^|^Col0^|^,^|^Col1^|^,^|^Col2^|^,^|^Col3^|^,^|^Col4^|^,^|^Col5^|^,^|^Col6^|^,^|^Col7^|^
^|^1234^|^,^|^4600869848^|^,^|^6000.00^|^,^|^2021-12-20 10:16:19.3600000^|^,^|^False^|^,^|^^|^,^|^^|^,^|^2^|^
^|^5431^|^,^|^3425143451^|^,^|^30000.00^|^,^|^2021-12-13 10:27:44.9030000^|^,^|^False^|^,^|^^|^,^|^^|^,^|^2^|^
.....................
............................
After using the delimiter ^|^,^|^ in the Excel Text Import Wizard:
Instead of specifying ^|^,^|^ as a custom delimiter, you can specify a comma as the delimiter, which will give you a result like the one below:
Then you can record a macro to replace the leftover ^|^ characters after the import is done, as described in the link below:
Create A Macro Code To Achieve Find And Replace Text In Excel
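If you would rather skip the recorded macro, it essentially boils down to a single find-and-replace over the imported range; a rough sketch (run against whichever sheet received the import):

' Rough sketch: strip the leftover ^|^ markers from the imported worksheet
Sub RemoveCaretPipeMarkers()
    ActiveSheet.UsedRange.Replace What:="^|^", Replacement:="", _
        LookAt:=xlPart, MatchCase:=False
End Sub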
I exported a MySQL table as CSV (just CSV, not "CSV for MS Excel") and edited it in Excel. Then I opened it with Notepad++ in order to change the character encoding to UTF-8. Now I want to import this file back into the MySQL table.
I don't know what to do next. I tried copying it into Excel, saving it as CSV, and importing the CSV file, but it doesn't work.
I use shared hosting. I exported the table from phpMyAdmin.
Export->Format->CSV
Please help me.
925,1,2020-04-29 20:00:00,2020-04-29 20:00:00,,ck-X{´¯nsâ ]nXmhv,,publish,closed,closed,,ck-x{´¯nsâ-]nxmhv,,,2020-04-29 20:00:00,2020-04-29 20:00:00,,0,https://example.com/questions/ck-x{´¯nsâ-]nxmhv,0,lp_question,,0
926,1,2020-04-29 20:00:00,2020-04-29 20:00:00,,B[p-\nI ck-X{´¯nsâ ]nXmhv,,publish,closed,closed,,b[p-\ni-ck-x{´¯nsâ-]nxmhv,,,2020-04-29 20:00:00,2020-04-29 20:00:00,,0,https://example.com/questions/b[p-\ni-ck-x{´¯nsâ-]nxmhv,0,lp_question,,0
My schema.ini file is being ignored. I get the same results whether I have a schema.ini file in the same folder as my tab-delimited file or not. All of the columns end up in a single column. I am trying to use a schema.ini because I am importing tab-delimited files. The results would make perfect sense if Access were trying to import a comma-delimited file.
So my postulate is that the schema.ini file is just being ignored.
I am running Access from a .NET program using the Microsoft Access 14.0 Object Library.
I am using this command from .NET:
Access.DoCmd.TransferText( Microsoft.Office.Interop.Access.AcTextTransferType.acImportDelim, , TableName, TabFile, HasFieldNames)
Here is my schema.ini file, not that it matters since it is being completely ignored:
[impacts.txt]
Format=TabDelimited
ColNameHeader=True
MaxScanRows=0
Clues? Thanks!
EDIT:
I tried running this from within an Access Module with the same results.
I tried editing the registry to change the Format value there. Same results.
Consider an action query, either append or make-table, since schema.ini files do work when a text file is queried directly in an Access query. The statements below assume the .ini file is in the same directory as the text file.
INSERT INTO mytableName
SELECT * FROM [text;Database=C:\Path\To\Text\File].[impacts.txt]
SELECT * INTO newtableName FROM [text;Database=C:\Path\To\Text\File].[impacts.txt]
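The schema.ini is read by the Jet/ACE text driver when the text file is queried this way, which is likely why the query route honors it while the TransferText call above does not. If you want to run the append form from VBA rather than a stored query, a minimal sketch, reusing the table and path names from the statements above:

' Minimal sketch: execute the append query so the text driver picks up
' the schema.ini sitting next to impacts.txt
Sub ImportImpactsViaQuery()
    Dim strSQL As String
    strSQL = "INSERT INTO mytableName " & _
             "SELECT * FROM [text;Database=C:\Path\To\Text\File].[impacts.txt]"
    CurrentDb.Execute strSQL, dbFailOnError
End Sub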
The data I'm trying to import is here: http://archive.ics.uci.edu/ml/machine-learning-databases/car/
car.data 51 K
There are no missing values in this data, yet there are lots of "?"s in RapidMiner once I imported the data. I looked at the source, and the values shown as "?" do exist properly in the source. What may be the problem?
By the way, if I download that file, its extension is .data. How should I import that kind of file? I imported it as if it were a .csv file and it looks OK at first, but there are those "?"s.
It's been some time since I have used RapidMiner, but AFAIK you can import a .data file by using the CSV Import Wizard and setting the file type to All Files.
Regarding the ? values, you would have to look at the settings while importing the file and may have to adjust the data type in Step 4 of the Import Wizard (the dropdown menu).
Use the Read CSV operator to load the file.
In the "Data import wizard - Step 2 of 4 screen, find the Column Separation group box and select the radio button Comma ",". The default separator is the semicolon and car.csv is comma-separated.
In the next step, "Data import wizard - Step 3 of 4", change the annotation for row 1 from Name to - (the dash character). This tells RapidMiner that the first row contains data and not column headers.
I have already saved a recent import in Data Tasks, and now I am trying to call it automatically:
DoCmd.TransferText acImportDelim, "import1", "temp", "C:\Documents and Settings\agordon\Desktop\ACTIVITYEX.csv"
The error that I am getting is:
the text file specification "import1" does not exist
Does anyone know what this error means?
It's expecting the parameter "Import1" to be a specification name.
If "import1" is not an actual file, then you can just leave this parameter empty and the file should import into table temp (assuming temp is has the same number of fields as your CSV)