MS Access column name with Portuguese accents - ms-access

I have a lot of databases whose column names I would like to change. These databases were designed by a team that used Portuguese words for column names. I have managed to change names with spaces, but when I try to change the names of columns with Portuguese accents, e.g. Instalaçao, my VBScript fails with the error "Item not found in this collection". My VBScript for changing this column is as below.
tblName = "CONSUMIDORES"
oldColName = "[Instalaçao]"
newColName = "INSTALACAO"
Set dbe = CreateObject("DAO.DBEngine.120")
Set db = dbe.OpenDatabase(dbPath)
Set fld = db.TableDefs(tblName).Fields(oldColName)
fld.Name = newColName
This code works for other columns, including ones with spaces, but it fails for accented names. I am using MS Access 2013. I am new to VBScript.

Converting the script file to ANSI, as suggested by Gord Thompson, worked. (Presumably the .vbs file had been saved in an encoding where the "ç" in the literal had different bytes than the column name stored in the database.)

I'd try to get away with referring to the fields by number:
Set fld = db.TableDefs(tblName).Fields(14)
(assuming Instalaçao is the 15th field of that table; the Fields collection is zero-based).
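If saving the script as ANSI were not an option, the by-index approach avoids the accented literal entirely. A minimal sketch, assuming the same table and database path as in the question (dbPath must be set beforehand, and the target ordinal confirmed from the listing):

```vb
Dim dbe, db, tdf, fld, i
Set dbe = CreateObject("DAO.DBEngine.120")
Set db = dbe.OpenDatabase(dbPath)           ' dbPath as in the question
Set tdf = db.TableDefs("CONSUMIDORES")

' List every field with its index so the right ordinal can be confirmed
For i = 0 To tdf.Fields.Count - 1
    WScript.Echo i & ": " & tdf.Fields(i).Name
Next

' Fields is zero-based, so index 14 is the 15th column
Set fld = tdf.Fields(14)
fld.Name = "INSTALACAO"
db.Close
```

Echoing the names first is cheap insurance: renaming by ordinal silently hits the wrong column if the table layout ever differs from what you expect.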

Related

Access query recordset result output to word shows special characters

Problem Description:
I am using Microsoft Access Plus 2010 with the code below to export the result of a query to a Word table. However, all kinds of special characters are exported if the record is over 255 characters.
Below are the query and the VBA.
Query name: Qa
Query function: select field from Ta
VBA:
Dim qdf As DAO.QueryDef
Dim db As DAO.Database
Set db = CurrentDb
Set qdf = db.QueryDefs("Qa")
Dim results As DAO.Recordset
Dim flds As String
Set results = qdf.OpenRecordset()
While (Not results.EOF)
doc.addRecord results
results.MoveNext
Wend
qdf.Close
Public Sub addRecord(pubRecordSet As DAO.Recordset)
Dim flds As String
flds = pubRecordSet.Fields("fieldname").Value
mTable.cell(1, 1).range.InsertAfter (flds)
...
End Sub
Where 'mTable' is a Word table object, 'fieldname' is the name of the field to be exported to Word Table.
This VBA generally works fine when the length of flds is less than 255 characters; however, it puts a lot of special characters into the table cell if the length exceeds 255.
Example on special characters exported to Word table cell:
退D瞻껙皿 Ƭ" " ᬈ௩Hȷ⫗ 鋨D૝૝ィ௨瞻껥皿௲Ǭ" "Tೕ ŮԱ ࿨ซ鐌D
I checked the limitations of MS Access from the link here. It says a query's recordset can be up to 1 GB, and my data is far less than that (around 255 characters). Any help is appreciated.
I think they are being truncated or corrupted, almost certainly due to one of the reasons listed here: http://allenbrowne.com/ser-63.html
By definition, if the values are over 255 characters long they will be handled as Memo, or Long Text (same data type; Memo is the older name for it).
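Before blaming Word, it may be worth confirming that the Memo values come out of DAO intact. A small diagnostic sketch, using the query and field names from the post (adjust to your database; dbMemo is the DAO type constant for Memo/Long Text fields):

```vb
Public Sub CheckMemoField()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim s As String
    Set db = CurrentDb
    Set rs = db.QueryDefs("Qa").OpenRecordset()
    Do While Not rs.EOF
        ' Read the full value with a plain assignment (no GetChunk needed)
        s = Nz(rs.Fields("fieldname").Value, "")
        ' dbMemo = 12; if the type is not Memo, text is capped at 255 chars
        Debug.Print rs.Fields("fieldname").Type, Len(s)
        rs.MoveNext
    Loop
    rs.Close
End Sub
```

If the lengths and types printed here look right, the corruption is happening on the Word side of the transfer rather than in Access.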

Loop through all fields and change data type

Not sure what I'm missing. I'm trying to loop through all the fields in a table and, if any field's data type is not Text, change it to Text. Some fields come in as numbers but need to be converted to text before exporting to a .txt file. I cannot add quotation marks to the fields; each field must contain only numbers and letters.
I have tried the following, with a separate db.Execute ALTER line for each field, but I get run-time error 3047 ("Record is too large") after about the 6th field. I'm assuming it would be best to loop over the fields and only change a field to Text if it isn't already?
Dim table As DAO.TableDef
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "ALTER TABLE ImportFromExcel " _
& "ALTER COLUMN RPT_SPLIT_ID CHAR;"
The easiest method is to cast the field as a string in your SELECT statement. Use the CStr() function to cast whatever non-text field you have to text. When possible, collect your data with a single SQL query rather than looping through recordsets. The difference may not be noticeable on smaller tables, but it is huge on large record sets. Try this:
SELECT CStr(field1) AS field1, CStr(field2) AS field2, CStr(field3) AS field3
FROM ImportFromExcel
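If the columns really must be altered in place, a sketch of the loop the question asks for, touching only non-text columns. The table name is from the post; dbText and dbMemo are DAO constants, and the CHAR size here is an assumption you should fit to your data:

```vb
Public Sub ConvertFieldsToText()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim fld As DAO.Field
    Set db = CurrentDb
    Set tdf = db.TableDefs("ImportFromExcel")
    For Each fld In tdf.Fields
        ' Skip fields that are already Text or Memo
        If fld.Type <> dbText And fld.Type <> dbMemo Then
            ' CHAR with no size defaults to 255, which can push the row
            ' past the per-record size limit (hence error 3047); size it
            ' tightly instead. Values longer than the size are truncated.
            db.Execute "ALTER TABLE ImportFromExcel ALTER COLUMN [" & _
                fld.Name & "] CHAR(50);", dbFailOnError
        End If
    Next fld
End Sub
```

The explicit size is the key point: six columns at the 255-character default is already enough to overflow the record, which matches the failure "after about the 6th field".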

Determining Encoding in MS Access Database

I have an MS Access database with a column that has some strange encoding. Oddly, I am unable to copy/paste it into anything (Chrome, Word, etc.), because the paste strips out most of the Unicode characters (though not all of them). What I am wondering is: is there a way to determine what type of encoding is being used here?
Somehow the program I am using takes this column and decodes it to readable text. I converted the Access database to PostgreSQL on a Linux system, but I'm pretty sure the encoding did not map correctly into the PostgreSQL database. What I'm trying to do is convert the values to hex, but I cannot, since I'm unable to copy/paste the characters out of the database.
You can open the table as a recordset. Then loop the records and convert the field to hex using a function like this:
Public Function StrToByte(ByVal strChars As String) As String
    ' Convert the string to its ANSI bytes and return them as hex pairs.
    ' Note: StrConv uses the system ANSI code page, so characters outside
    ' it are flattened; for the raw UTF-16 code units use AscW instead.
    Dim abytChar() As Byte
    Dim lngChar As Long
    Dim strByte As String
    abytChar() = StrConv(strChars, vbFromUnicode)
    strByte = Space(2 * (1 + UBound(abytChar) - LBound(abytChar)))
    For lngChar = LBound(abytChar) To UBound(abytChar)
        ' Pad with a leading zero so every byte occupies two characters
        Mid(strByte, 1 + 2 * lngChar) = Right("0" & Hex(abytChar(lngChar)), 2)
    Next
    StrToByte = strByte
End Function
Or create a query:
SELECT *, StrToByte([EncryptedFieldName]) AS HexField
FROM tblYourTable
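The recordset-loop variant mentioned above might look like this sketch, dumping the hex of the suspect column to the Immediate window (the table and field names are the placeholders from the answer):

```vb
Public Sub DumpFieldHex()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb
    Set rs = db.OpenRecordset( _
        "SELECT [EncryptedFieldName] FROM tblYourTable")
    Do While Not rs.EOF
        ' Print one hex string per record; Nz guards against Null values
        Debug.Print StrToByte(Nz(rs![EncryptedFieldName], ""))
        rs.MoveNext
    Loop
    rs.Close
End Sub
```

With the hex in hand you can look for telltale patterns, e.g. runs of bytes in the C2–C3 range followed by 80–BF often indicate UTF-8 read as a single-byte code page.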

dealing with strange characters with access, asp and CSVs

I have a problem: I have to create a CSV file from an ASP Classic page, taking the data from an MS Access database. All quite simple, but the final file contains tons of strange characters appearing as squares (the unknown-character glyph). I must get rid of those characters, but I really don't know how. Any ideas?
This is how one value looks in the file: M�NSTERSTRA�E, and of course I don't know in advance which characters cause problems; there are a lot of them.
and this is how I write the csv...
dim fs,f,d
set fs = Server.CreateObject("Scripting.FileSystemObject")
set f = fs.OpenTextFile(Server.MapPath("clienti.csv"), 2, true,true)
d = ""
do while not rs1.EOF
d = ""
For Each fField in RS1.Fields
f.Write(d)
f.Write(" ")
temp = RS1(fField.Name)
if len(trim(temp)) > 0 then
f.Write(trim(temp))
end if
d = ";"
Next
f.WriteLine("")
rs1.movenext
loop
f.Close
set f = Nothing
set fs = Nothing
I can't simply replace all of the characters, because I don't know them before I extract all the clients. I need some workaround for this.
The � means that your viewer doesn't recognize that character, so it shows a substitute. One example is the "smart quotes" (curly ones) that some applications, like MS Word, substitute for straight quotes. The default character encoding is ISO-8859-1.
If you don't want those to show up, you have two choices: you can delete them, or you can try to find the appropriate substitution.
Either way, first you have to identify all the characters that result in �. To do this, you'll have to go through each character and compare it to this list: http://www.ic.unicamp.br/~stolfi/EXPORT/www/ISO-8859-1-Encoding.html
Once you identify the bad characters, you can either delete them or, once you figure out what they should be, change them to that. For instance, smart quotes are coded as 147 and 148 (in Windows-1252), so you can change both of those to straight quotes ("). If you do a search, you'll probably find code that does most, if not all, of this for you.
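The delete-or-substitute step can be sketched as a helper function to call on each field value before f.Write. The mappings shown are only the common Windows-1252 punctuation; note that the final loop also drops legitimate accented letters, which is the "delete" option described above:

```vb
Function CleanField(s)
    Dim i, ch, out
    ' Substitute known Windows-1252 punctuation with ASCII equivalents
    s = Replace(s, Chr(147), """")   ' left double smart quote  -> "
    s = Replace(s, Chr(148), """")   ' right double smart quote -> "
    s = Replace(s, Chr(145), "'")    ' left single smart quote  -> '
    s = Replace(s, Chr(146), "'")    ' right single smart quote -> '
    ' Delete anything left outside the printable ASCII range
    out = ""
    For i = 1 To Len(s)
        ch = Mid(s, i, 1)
        If Asc(ch) >= 32 And Asc(ch) <= 126 Then out = out & ch
    Next
    CleanField = out
End Function
```

If the accented letters should survive (MÜNSTERSTRASSE rather than MNSTERSTRASSE), widen the kept range or add more substitutions instead of deleting; the real fix is usually to write the file in an encoding the consumer expects.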

classic asp - reading csv file having alphanumeric data

I'm having trouble reading a CSV file that contains alphanumeric data. Below is my code in Classic ASP:
strConn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
ls_map_path & ";Extended Properties=""Text;HDR=Yes;FMT=Delimited"";"
Set lo_conn = Server.CreateObject("ADODB.Connection")
lo_conn.Open strConn
Set lo_rs = Server.CreateObject("ADODB.recordset")
lo_rs.CursorLocation = adUseClient
lo_rs.open "SELECT * FROM " & as_file_name, lo_conn, adOpenStatic, adLockOptimistic, adCmdText
and below is the data:
user_id,status,last_name,first_name,middle_name
1234,1,DeVera,athan,M.
1234,1,De Vera,athan,M.
ABC1,1,Santos,Shaine
abcd,1,Santos,Luis
1234,1,De Vera,athan,M.
1234,1,De Vera,athan,M.
ABC1,1,Santos,Shaine
When reading the "user_id" column using lo_rs.Fields.Item("user_id"), it retrieves the "1234" values perfectly, but rows whose user_id is alphanumeric come back as Null.
I don't know why it returns Null. If the data is all alphanumeric, it reads the user_id column perfectly. I think the problem only occurs when a column mixes numeric and alphanumeric values.
Does anyone know how to resolve this? Or maybe I'm just missing something in the connection string.
Please advise, and thank you very much in advance for the help!
To get around the type inference you can create a SCHEMA.INI that defines the types of each column in the CSV file.
Set HDR=No in the connection string and, in the directory containing the CSV (ls_map_path), create a schema.ini:
[filenamehere.csv]
ColNameHeader=False
Format=CSVDelimited
Col1=user_id Text
Col2=status Long
Col3=last_name Text
Col4=first_name Text
Col5=middle_name Text
The type mappings used by the text provider are now based on the above schema.
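For completeness, a sketch of the question's connection string with HDR switched off to match (all names are the question's own):

```vb
strConn = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & _
          ls_map_path & ";Extended Properties=""Text;HDR=No;FMT=Delimited"";"
```

Note that with the header treated as data, the first record returned will be the header line itself (user_id, status, ...), so skip the first row in the loop or remove the header from the file.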