Import UTF-8 file in VBA - ms-access

I'd like to import a pipe-delimited file into a table, but the file is UTF-8 encoded and has a dynamic structure.
I have tried TransferText and FSO, but only ADODB.Stream seems to handle that encoding well; however, it only reads the full text at once...
How can I read such a file line by line to add rows to an existing table?
Thanks in advance.

You can read a line from an ADO Stream with its ReadText method.
strLine = objStream.ReadText(-2) ' -2 = adReadLine
You may need to set your stream's LineSeparator property first (e.g. adLF = 10 if the file uses bare line feeds rather than CRLF).
After you read the line, you can split it on the pipe character:
arrFields = Split(strLine, "|")
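Putting that together, a minimal sketch (the file path, table name, and field mapping are placeholders; adapt them to your dynamic structure):
Dim objStream As Object
Dim strLine As String
Dim arrFields As Variant
Dim rs As DAO.Recordset

Set objStream = CreateObject("ADODB.Stream")
objStream.Charset = "utf-8"                    ' decode the file as UTF-8
objStream.Open
objStream.LoadFromFile "C:\data\import.txt"    ' hypothetical path
objStream.LineSeparator = 10                   ' adLF; use -1 (adCRLF) for CRLF files

Set rs = CurrentDb.OpenRecordset("tblImport")  ' hypothetical table name

Do Until objStream.EOS
    strLine = objStream.ReadText(-2)           ' adReadLine
    arrFields = Split(strLine, "|")
    rs.AddNew
    rs.Fields(0).Value = arrFields(0)          ' map the remaining fields as needed
    rs.Update
Loop

rs.Close
objStream.Close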


How can I get MaxScript to retrieve data from a .txt file for naming exported objects?

I need to put some code in a MaxScript that will take data from parts of a .txt (or maybe CSV) file and use it to name exported objects, etc.
So far I've only been using the Listener to work out scripts, so this is beyond me right now.
Any help appreciated, thanks!
Here is a nice short example of opening and parsing a csv file:
https://forums.autodesk.com/t5/3ds-max-programming/need-maxscript-help-reading-values-from-a-csv/td-p/4823113
I would suggest taking a look at FileStream. You should be able to open and read your file using it :)
https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2019/ENU/3DSMax-MAXScript/files/GUID-BB041082-3EEF-4576-9D69-5B258A59065E-htm.html
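For example, a minimal sketch that reads a text file line by line (the file name is a placeholder):
f = openFile "Job_Log.txt"
if f != undefined do
(
    while not (eof f) do
    (
        ln = readLine f
        print ln -- use each line to build your export name here
    )
    close f
)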
So I have got this far:
adata = (dotnetClass "System.IO.File").ReadAllLines "Job_Log.csv"
print adata
exportFile ((maxfilepath + "\Assets\" ) + "adata" + "_123") #noPrompt selectedOnly:true using:ExporterPlugin.classes[14]
It exports to the correct path, and the second line prints the data/name in the .csv file OK, but I can't get that value to be the name it exports as.
It just comes out as "adata_123.obj" instead.
Any ideas?
This is because, in your export line, you have adata between double quotes, so instead of using your variable it uses the literal string "adata". Try using this snippet instead:
assetPath = PathConfig.AppendPath maxfilepath "Assets"
fileName = (adata as string) + "_123"
fullPath = PathConfig.AppendPath assetPath fileName
exportFile fullPath #noPrompt selectedOnly:true using:ExporterPlugin.classes[14]
Note: The "as string" and the bracket might not be necessary on the second line if your variable is already a string.

how to convert dbf to csv?

How can I convert a DBF to CSV? I need to use this library, but it gave an error: http://pythonhosted.org/dbf
import dbf
dbf.export('crop1-fx')
print 'Done'
"C:\Users\User\Anaconda2\python.exe"
"C:/Users/User/Desktop/Python/23/dbf/insertValuesDBF.py" Traceback
(most recent call last): File
"C:/Users/User/Desktop/Python/23/dbf/insertValuesDBF.py", line 3, in
dbf.export('crop1-fx') File "C:\Users\User\Anaconda2\lib\site-packages\dbf\ver_2.py", line 7824,
in export
table = source_table(table_or_records[0]) File "C:\Users\User\Anaconda2\lib\site-packages\dbf\ver_2.py", line 7956,
in source_table
table = thingie._meta.table() AttributeError: 'str' object has no attribute '_meta'
Process finished with exit code 1
You almost had it:
import dbf
db = dbf.Table('crop1-fx')
dbf.export(db)
The above will create a crop1-fx.csv file; however, I'm not sure this will work with a 24-digit numeric field in the table.
To convert a .DBF file to .CSV, download dBASE III PLUS or any other dBASE software available on the net. (Note that I am referring to the 16-bit platform on DOS.)
Once dBASE is downloaded, go to the dot prompt and give the following commands:
Type use <the dbf file in question, without the .dbf extension>. You will see the name of the dbf file on the display bar. Then type copy to <the file name you want, limited to 8 characters> delimited.
The data in the dbf file is now written to a text file with each field value surrounded by double quotes and the fields separated by commas. This file can then be used to import the data into any other database system that can read a .CSV or delimited file.
If a comma-separated file without the quote marks is required, a procedure can be written in dBASE to achieve that as well.
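At the dot prompt, the session would look roughly like this (the file names are placeholders):
. use crop1
. copy to crop1out delimited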

Mysql dump character escaping and CSV read

I am trying to dump the contents of my MySQL query into a CSV file and read it using a Java-based open-source CSV reader. Here are the problems I face with that:
My data set has around 50 fields, and a few of them contain text with line breaks. To keep my CSV reader from breaking, I set FIELDS OPTIONALLY ENCLOSED BY '"' so that values with line breaks are wrapped in double quotes. But with that setting, even fields without line breaks get wrapped in double quotes.
By default, the escape character in a MySQL dump is \ (backslash). This causes line breaks to be written with a trailing \, which breaks the CSV parser. If I set FIELDS ESCAPED BY '' (empty string) to remove the trailing \, the double quotes inside the text are no longer escaped, which still breaks the CSV read.
It would be great if I could skip escaping the line breaks but still have double quotes escaped, so the CSV reader doesn't break.
Any suggestions on what I can do here?
Thanks,
Sriram
Try dumping your data into CSV using uniVocity-parsers; you can then read the result back with the same library.
Use this to dump the data out:
// assumes import java.sql.ResultSet and the com.univocity.parsers.csv classes
ResultSet resultSet = executeYourQuery();
// To dump the data of our ResultSet, we configure the output format:
CsvWriterSettings writerSettings = new CsvWriterSettings();
writerSettings.getFormat().setLineSeparator("\n");
writerSettings.setHeaderWritingEnabled(true); // if you want the column names to be printed out
// Then create a routines object:
CsvRoutines routines = new CsvRoutines(writerSettings);
// The write() method takes care of everything. Both resultSet and output are closed by the routine.
routines.write(resultSet, new File("/path/to/your.csv"), "UTF-8");
And this to read your file:
// creates a CSV parser
CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.getFormat().setLineSeparator("\n");
parserSettings.setHeaderExtractionEnabled(true); // extract headers from file
CsvParser parser = new CsvParser(parserSettings);
// call beginParsing to read records one by one, iterator-style.
// Note that there are many ways to read your file; check the documentation.
parser.beginParsing(new File("/path/to/your.csv"), "UTF-8");
String[] row;
while ((row = parser.parseNext()) != null) {
    System.out.println(Arrays.toString(row)); // requires java.util.Arrays
}
Hope this helps.
Disclaimer: I'm the author of this library, it's open source and free (Apache V2.0 license)
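On the MySQL side, a commonly used recipe is to make the quote character its own escape character: embedded quotes then come out doubled (the usual CSV convention) and backslash escaping is turned off entirely, which avoids the trailing-\ problem. A sketch — the table name and output path are placeholders, so test it against your data:
SELECT *
FROM your_table
INTO OUTFILE '/tmp/dump.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\n';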

Read CSV File By Line in ASP

I am trying to read a CSV file with VBScript and it is causing huge problems because it isn't recognizing the line breaks. The way I have it right now is:
Const ForReading = 1 ' not built into VBScript, so it must be defined
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile(Server.MapPath("my_csv_file.csv"), ForReading)
Do Until objFile.AtEndOfStream
    strLine = objFile.ReadLine
    arrFields = Split(strLine, ",")
    ' LOOP_STUFF_HERE
Loop
The CSV file has several lines, but it is being read as one long line, and the last item of each line gets combined with the first item of the next line because there is no comma after the last field (the file is created by a client of mine in Excel and then sent to me). My current workaround is to open it in a text editor, manually add a comma to the end of each line, and then remove the line breaks. That won't work in the long run because we are setting up an automated system.
Basically, I need to be able to split the contents on a line break (I've tried Split(strLine, "\n"), but that doesn't seem to work) and then, once they are split by line break, split them by comma; a multidimensional array, in other words.
I can't find out how to get VBScript to recognize the line breaks, though. Any ideas? Thanks for your help.
Reading a CSV with the FileSystemObject? Don't reinvent the wheel; there are different and proven ways to do exactly what you want in ASP.
The easiest way is to use the OLEDB Jet or OLEDB ACE driver to read the file.
Basically, you need to create an ADODB.Connection object with a specific connection string to get all the data in the CSV as rows; then you can fetch all the data as an array using the GetRows method, or use the Recordset object directly.
Related posts on using this functionality:
ASP.NET (Ace) When reading a CSV file using a DataReader and the OLEDB Jet data provider, how can I control column data types?
ASP.NET (Ace) Microsoft.ACE.OLEDB.12.0 CSV ConnectionString
ASP-Classic (Jet) Reading csv file in classic asp. Problem: column values are truncated up to 300 characters
The connection string is almost the same for the Jet and ACE drivers (and it's the important part).
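As an aside, Split(strLine, "\n") fails because VBScript string literals have no escape sequences; the newline constants are vbLf, vbCr, and vbCrLf.
A minimal classic-ASP sketch of the OLEDB approach (the folder and file names are placeholders; swap in the ACE provider string if Jet is not available):
Dim conn, rs, rows
Set conn = Server.CreateObject("ADODB.Connection")
' Data Source points at the folder; the file name goes in the query itself.
conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
          "Data Source=" & Server.MapPath("/data") & ";" & _
          "Extended Properties=""text;HDR=Yes;FMT=Delimited"""
Set rs = conn.Execute("SELECT * FROM [my_csv_file.csv]")
rows = rs.GetRows() ' 2-D array indexed (field, record)
rs.Close
conn.Close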

prevent CRLF in CSV export data

I have an export functionality that reads data from a DB (entire records) and writes them to a .txt file, one record per row, with fields separated by ';'. The problem I am facing is that some fields contain CRLFs, and when I write them to the file the record continues on the next line, destroying the structure of the file.
The only solution I see is to replace the CRLFs with a custom value and replace them back at import time, but I don't like this solution because these files are huge and the replace operation hurts performance...
Do you have any other ideas?
Thank you!
Yes, use a CSV generator that quotes string values. For example, Python's csv module.
For example (ripped and modified from the csv docs):
import csv

def write(filename):
    # QUOTE_ALL wraps every field in double quotes, so embedded
    # newlines stay inside the quoted field
    spamWriter = csv.writer(open(filename, 'wb'), quoting=csv.QUOTE_ALL)
    spamWriter.writerow(['Spam'] * 5 + ['Baked Beans'])
    spamWriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam\nbar'])

def read(filename):
    reader = csv.reader(open(filename, 'rb'))
    for row in reader:
        print row  # Python 2 syntax

write('eggs.csv')
read('eggs.csv')
Outputs:
['Spam', 'Spam', 'Spam', 'Spam', 'Spam', 'Baked Beans']
['Spam', 'Lovely Spam', 'Wonderful Spam\r\nbar']
If you have control over how the file is exported and imported, you might want to consider using XML. Also, I believe you can use double quotes to indicate literals like "," in the values.
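For illustration, the usual CSV quoting convention (using ';' as the delimiter to match the export format described above): a field containing a line break, the delimiter, or a quote is wrapped in double quotes, and any literal double quote inside it is doubled:
name;comment
"Widget";"first line
second line"
"Bob";"he said ""hi"""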