My code reads in the lines of a .csv (comma-separated) file and assigns each value to a local variable. After I have done some logic and stored the values in a table, I read in the next line of the .csv file and repeat the process. My question is: what does objRecordset.MoveNext do compared to the loop itself? Here is my code; some of it has been removed to focus on the loop and objRecordset.MoveNext.
objconnection.Open 'connection string'
objRecordset.Open "SELECT * FROM [" & ThisFileName & "]", objconnection, adOpenStatic, adLockOptimistic, adCmdText 'select all text lines from the file
Do While Not objRecordset.EOF 'read lines until end of file
    'Clear out all the local variables so prior values aren't left there
    SampleName = ""
    DateTimeAcquired = ""
    Analyte = ""
    Concentration = ""
    Units = ""
    'Read in each value according to column name and save it to a variable
    SampleName = objRecordset.Fields("Sample Name").Value
    DateTimeAcquired = objRecordset.Fields("Date and Time Acquired").Value
    Analyte = objRecordset.Fields("Element Full Name").Value
    Concentration = objRecordset.Fields("Concentration").Value
    Units = objRecordset.Fields("Units").Value
    'Logic done on variables
    objRecordset.MoveNext
Loop
I am using Access 2010 VBA
objRecordset.MoveNext serves two purposes:
1. It moves the cursor to the next record.
2. It lets the loop reach its exit condition.
If you don't call objRecordset.MoveNext, the loop continues infinitely: the cursor stays on the same record, so objRecordset.EOF is never reached.
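A minimal sketch of the pattern, using the same objRecordset as in the question:

Do While Not objRecordset.EOF
    'work with the current record here
    objRecordset.MoveNext 'advance the cursor; without this line, EOF stays False forever
Loop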
This is a problem that started happening a day ago.
I have an Access database file with a form for creating jobs, updating a job's sector, and deleting a job from the MySQL table.
Two tables are used by this form: a local Access table called "Job Route" and a table called "To Do" linked through the MySQL ODBC ANSI driver, version 5.3. The local table stores user-submitted data covering all job areas and states, while the MySQL table only shows one job area at a time.
When a new entry is created, the text box details from the Access form are stored in both tables, where each job contains up to 4 different sectors (e.g. [start date], [area1], [person in charge 1], [description1], ... [area4], [person in charge 4], [description4]). Whenever the data is updated to its next state, only the job counter field is incremented in the local table, while every field in the MySQL table "To Do" is updated to its next-state values.
Connection to the server is good, and everything was running fine until an issue popped up in the updating function.
Basically, the function works like this: a listbox control is populated with all current job data queried from the "To Do" table. The user selects an entry and hits a button, which loads the next sector's information from "Job Route" into various textbox controls. The user can change those textbox inputs if they want; the only table changed when the function runs is "To Do", while the information in "Job Route" remains the same. When the user hits the update button, the next sector's field data is written to "To Do", and only a counter in "Job Route" is incremented to mark the current sector.
My problem is this: for the most part everything runs fine, but one of the fields in the "To Do" table does not update with the value it should take from its textbox. For instance, if the textbox control was set to "Wyntile", the field should be set to that, but for some reason a different value shows up instead, for example "Apples". Here is the code:
Private Sub moveJob2_Click()
    'get the job number
    JobNum = Text31
    CurrArea = DLookup("[Area]", "[To_Do]", "[Job_Number] =""" & JobNum & """")
    area1 = DLookup("[Area1]", "[Job Route]", "[Job Number] =""" & JobNum & """")
    area2 = DLookup("[Area2]", "[Job Route]", "[Job Number] =""" & JobNum & """")
    area3 = DLookup("[Area3]", "[Job Route]", "[Job Number] =""" & JobNum & """")
    area4 = DLookup("[Area4]", "[Job Route]", "[Job Number] =""" & JobNum & """")
    'get what the current area is
    Current = DLookup("[Current]", "[Job Route]", "[Job Number] =""" & JobNum & """")
    'if the current area is the first area then check to make sure there is a second
    'if so, then set the new area to it
    If Current = 1 Then
        If area2 = "---" Then
            MsgBox area1 + " was the last area in the route. The job cannot be moved."
            Exit Sub
        End If
        newArea = area2
    ElseIf Current = 2 Then
        If area3 = "---" Then
            MsgBox area2 + " was the last area in the route. The job cannot be moved."
            Exit Sub
        End If
        newArea = area3
    ElseIf Current = 3 Then
        If area4 = "---" Then
            MsgBox area3 + " was the last area in the route. The job cannot be moved."
            Exit Sub
        End If
        newArea = area4
    Else
        MsgBox area4 + " was the last area in the route. The job cannot be moved."
        Exit Sub
    End If
    'set up link to both the To_Do and Job Route tables
    Dim dbJobNumbers As DAO.Database
    Dim rstJob As DAO.Recordset
    Dim jobRoute As DAO.Recordset
    Set dbJobNumbers = CurrentDb
    Set rstJob = dbJobNumbers.OpenRecordset("To_Do")
    Set jobRoute = dbJobNumbers.OpenRecordset("Job Route")
    ' >> Edit the job in the To_Do table
    ' ERROR: Out of all these, only [Person_In_Charge] is being set to something
    ' completely different from Text33, which wasn't changed by the user.
    rstJob.FindFirst "[Job_Number]=""" + Text31 + """"
    rstJob.Edit
    rstJob("[Area]").Value = newArea
    rstJob("[Person_In_Charge]").Value = Text33
    rstJob("[Equipment]").Value = Text37
    rstJob("[Description]").Value = Text35
    rstJob.Update
    'update the current area for the Job Route
    jobRoute.FindFirst "[Job Number]=""" + Text31 + """"
    jobRoute.Edit
    jobRoute("[Current]").Value = CInt(Current) + 1
    jobRoute.Update
    'success message
    MsgBox Text31 + " has been moved from " + CurrArea + " to " + newArea + "."
    'requery the listboxes
    Dim selectParas As String
    selectParas = "SELECT [a].[Job_Number] as [Job Number], [a].[Description], [a].[Person_In_Charge] as [Person in Charge], [a].[Area] " & _
                  " FROM [To_Do] As [a];"
    listRemoveJobs.RowSource = selectParas
    listRemoveJobs.Requery
    listChangeJobArea.RowSource = selectParas
    listChangeJobArea.Requery
End Sub
The function has been running fine, and even now when I test it again it runs as programmed. Today, though, I received the "ODBC Insert on 'To Do' has failed" error, but that's from a different function. So I was thinking that something is wrong in the ODBC connection/MySQL table, but when I checked the table in phpMyAdmin it follows, for the most part, the same format as the other MySQL tables used in Access.
Also of note: the person who reported this issue runs an old Windows XP machine, and on that computer the configured ODBC ANSI 5.3 driver instance has been known to completely disappear from Access' Data Source list (while the driver was still installed in Windows). That time, the driver instance magically reappeared in the Data Source list when the computer was restarted. ... I know this is rather long, but I can't seem to find the cause of this updating error in Access. Is there a known issue of ODBC having stability problems in its connection? Why is the value changed to something completely different on update? Any insight would be appreciated.
While there is no reproducible example to work from for your specific situation, consider running pure SQL UPDATE queries with bound parameters. Your area conditional logic can be rewritten as a nested IIF expression. Possibly this will simplify your problem and streamline the code without DLookup calls or multiple recordset updates. Also, your re-assignment to RowSource is not necessary. The code below uses parameterization, a best practice when running SQL from the application layer:
SQL (save both below as Access queries)
mySavedJoinUpdateQuery
PARAMETERS Text33Param Text(255), Text35Param Text(255),
           Text37Param Text(255), JobNumberParam Text(255);
UPDATE [To_Do] AS d
INNER JOIN [Job Route] AS r
    ON d.[Job_Number] = r.[Job Number]
SET d.[Area] = IIF(r.[Current] = 1 AND r.[Area2] <> '---', r.[Area2],
                   IIF(r.[Current] = 2 AND r.[Area3] <> '---', r.[Area3],
                       IIF(r.[Current] = 3 AND r.[Area4] <> '---', r.[Area4], r.[Area1]))),
    d.[Person_In_Charge] = Text33Param,
    d.[Equipment] = Text37Param,
    d.[Description] = Text35Param
WHERE r.[Job Number] = JobNumberParam;
mySavedSimpleUpdateQuery
PARAMETERS JobNumberParam Text(255);
UPDATE [Job Route] r
SET r.[Current] = r.[Current] + 1
WHERE r.[Job Number] = JobNumberParam;
VBA
Private Sub moveJob2_Click()
    Dim qdef As QueryDef

    ' UPDATE JOIN QUERY
    Set qdef = CurrentDb.QueryDefs("mySavedJoinUpdateQuery")
    qdef!JobNumberParam = Text31
    qdef!Text33Param = Text33
    qdef!Text35Param = Text35
    qdef!Text37Param = Text37
    qdef.Execute dbFailOnError
    Set qdef = Nothing

    ' UPDATE SIMPLE QUERY
    Set qdef = CurrentDb.QueryDefs("mySavedSimpleUpdateQuery")
    qdef!JobNumberParam = Text31
    qdef.Execute dbFailOnError
    Set qdef = Nothing

    ' REQUERY LIST BOXES
    listRemoveJobs.Requery
    listChangeJobArea.Requery
End Sub
I inherited an Access program that reads the Excel files in a directory and brings them into Access in a For Each oFile loop, using DoCmd.TransferSpreadsheet acLink and Set rs1 = db.OpenRecordset.
It then goes into a loop: Do While Not rs1.EOF.
In the loop it immediately checks the first two fields (ID and ID2) for correct formatting.
The previous code would just cancel out of the entire loop if there were any formatting errors, skip to the top of the For Each loop, and read the next spreadsheet.
I would like to test those two fields and skip to the top of the Do While loop if there is an error, but I can't seem to do this. I want to skip only the record that is in error and read the next one.
My code is below. I've left out the For Each loop since it just reads the next Excel file. The following code is within the For Each loop.
Set rs2 = db.OpenRecordset("tblIn_Access")
source = "C:\Documents\Test_for_import\" & oFile.Name
DoCmd.TransferSpreadsheet acLink, 10, oFile.Name, source, True
Set rs1 = db.OpenRecordset("Select * from " & oFile.Name & " ;")
Do While Not rs1.EOF
    IDVal = rs1.Fields(0).Value
    ID2Val = rs1.Fields(1).Value
    If Len(IDVal) < 9 Then
        'Here is where I want to stop and read the next record in the spreadsheet.
        'I don't want any more processing to occur.
    If Left(ID2Val, 1) = "H" Then
        'This is another case when I want to stop and read the next record
        'and not do any more processing on the record.
    'If these two errors do not occur then I want to continue with the next record.
    'There is a lot of data manipulation here to get fields in the right format.
    'Once they are correct I add the values to a table in Access.
    rs2.AddNew
    rs2("MBR_ID") = IDVal
    rs2("ORD_ID") = ID2Val
    rs2("DATE_IN") = dateinval
    rs2("DATE_OUT") = dateoutval
    rs2.Update
    rs2.MoveNext
    rs1.MoveNext
Loop
I can't get the processing to stop at the first two fields and go back to read the next record when they are incorrectly formatted. The only thing I've been able to do is what the code originally did: stop and read in the next Excel sheet. There is a lot of manipulation in the code following the checks on ID and ID2, but I only want that code to run if the two fields are in the proper format.
This is a time for the otherwise rarely used GoTo command.
Set rs2 = db.OpenRecordset("tblIn_Access")
source = "C:\Documents\Test_for_import\" & oFile.Name
DoCmd.TransferSpreadsheet acLink, 10, oFile.Name, source, True
Set rs1 = db.OpenRecordset("Select * from " & oFile.Name & " ;")
Do While Not rs1.EOF
    IDVal = rs1.Fields(0).Value
    ID2Val = rs1.Fields(1).Value
    If Len(IDVal) < 9 Then GoTo SkipRecord
    If Left(ID2Val, 1) = "H" Then GoTo SkipRecord
    rs2.AddNew
    rs2("MBR_ID") = IDVal
    rs2("ORD_ID") = ID2Val
    rs2("DATE_IN") = dateinval
    rs2("DATE_OUT") = dateoutval
    rs2.Update 'no MoveNext needed on rs2; AddNew/Update handles the insert
SkipRecord:
    rs1.MoveNext
Loop
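For reference, the same skip can also be written without GoTo by inverting the two checks; a sketch using the same variables as the code above:

Do While Not rs1.EOF
    IDVal = rs1.Fields(0).Value
    ID2Val = rs1.Fields(1).Value
    'only process records that pass both format checks
    If Len(IDVal) >= 9 And Left(ID2Val, 1) <> "H" Then
        rs2.AddNew
        rs2("MBR_ID") = IDVal
        rs2("ORD_ID") = ID2Val
        rs2("DATE_IN") = dateinval
        rs2("DATE_OUT") = dateoutval
        rs2.Update
    End If
    rs1.MoveNext
Loop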
I've been assigned the task of importing about 180 csv files into an Access 2007 database. These files have been put together over the years, and each will go into 1 of 3 folders. I have not set up any data checks or restrictions on these tables (such as primary keys, validation rules, or relationships); that will be done once the data has been imported. The data contained in these files comes from a survey which has changed over the years. This change has caused the fields to change: their order has changed, or sometimes a field is there and sometimes it is not. I do have a list of all the possible fields, though, and which table each csv file should be imported to, and I know that all these fields can be text.
Here is my problem: not knowing the order of the columns, or whether a column will exist at all, is it possible to run a function to import these text files into their respective tables by mapping each column in the text file to its associated column in the Access table?
Each text file has headers, which is useful for seeing what the fields actually are, but there is no text qualifier, which can be very annoying when dealing with id codes consisting entirely of numbers. Below is what I've tried so far. It gets the file location from a function elsewhere, adds each filename in that location to a collection, then for each file in that collection it tries to import it into its respective table.
'Get file names from the folder and store them in a collection
temp = Dir(location & "\*.*")
Do While temp <> ""
    fileNames.Add temp
    temp = Dir
Loop
'Go through each file in the collection and process it as needed
For Each temp2 In fileNames
    If (temp2 Like "trip*") Then 'Import trip files
        'Gets the data from a query:
        'DoCmd.RunSQL "SELECT * FROM [Text;FMT=Delimited;HDR=YES;IMEX=2;CharacterSet=437;DATABASE=" & location & "].[" & temp2 & "] As csv;"
        DoCmd.TransferText acImportDelim, "Trips_Import", "tbl_Trips", location & "\" & temp2, -1
    End If
    If (temp2 Like "catch*") Then 'Import catch files
        DoCmd.TransferText acImportDelim, "Catch_Import", "tbl_Catch", location & "\" & temp2, -1
    End If
    If (temp2 Like "size*") Then 'Import size files
        DoCmd.TransferText acImportDelim, "Size_Import", "tbl_Size", location & "\" & temp2, -1
    End If
Next temp2
You can create a SELECT * query for each CSV file and open the query as a recordset. Open another recordset for the destination table.
Then for each row in the CSV recordset, add a row to the destination recordset, loop through the CSV Fields collection, and add each CSV field value to the destination field with the same name.
This approach is independent of the order in which the fields appear in the CSV file. It also doesn't matter if the CSV file includes only a subset of the fields present in the destination table. As long as each CSV field also exists in the table, it should work (assuming compatible data types, the value satisfies validation rules/constraints, etc.).
Dim db As DAO.Database
Dim fld As DAO.Field
Dim rsDest As DAO.Recordset
Dim rsSrc As DAO.Recordset
Dim strSelect As String
Dim strTableName As String
Set db = CurrentDb
'Go through each file in the collection and process it as needed
For Each temp2 In fileNames
    Select Case Left(temp2, 4)
    Case "trip"
        strTableName = "tbl_Trips"
    Case "catc"
        strTableName = "tbl_Catch"
    Case "size"
        strTableName = "tbl_Size"
    Case Else
        ' what should happen here?
        ' this will trigger an error at OpenRecordset(strTableName) ...
        strTableName = vbNullString
        ' figure out a better alternative
    End Select
    strSelect = "SELECT csv.* FROM " & _
        "[Text;FMT=Delimited;HDR=YES;IMEX=2;CharacterSet=437;DATABASE=" & _
        Location & "].[" & temp2 & "] As csv;"
    Debug.Print strSelect
    Set rsSrc = db.OpenRecordset(strSelect, dbOpenSnapshot)
    Set rsDest = db.OpenRecordset(strTableName, dbOpenTable, dbAppendOnly)
    With rsSrc
        Do While Not .EOF
            rsDest.AddNew
            For Each fld In .Fields
                rsDest.Fields(fld.Name).Value = fld.Value
            Next
            rsDest.Update
            .MoveNext
        Loop
        .Close
    End With
    rsDest.Close
Next temp2
Note: This is a RBAR (row by agonizing row) approach, so the performance will be less than stellar. However, I presumed you will do this only once, so the performance hit will not be a deal-breaker. If you need a faster set-based approach instead, you can build and execute an "append query" for each CSV file. To do that, you would first need to get the CSV field names, and then build the appropriate INSERT INTO statement.
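A sketch of that set-based alternative, reusing db, strSelect, strTableName, temp2, fld, and Location from the snippet above. It peeks at the CSV once to read the field names, and assumes every CSV field name exists in the destination table:

'Build and run one append query per CSV file instead of looping over rows.
Dim strFieldList As String
Dim strInsert As String
Set rsSrc = db.OpenRecordset(strSelect, dbOpenSnapshot)
For Each fld In rsSrc.Fields
    strFieldList = strFieldList & ", [" & fld.Name & "]"
Next
rsSrc.Close
strFieldList = Mid(strFieldList, 3) 'drop the leading ", "
strInsert = "INSERT INTO [" & strTableName & "] (" & strFieldList & ") " & _
    "SELECT " & strFieldList & " FROM " & _
    "[Text;FMT=Delimited;HDR=YES;IMEX=2;CharacterSet=437;DATABASE=" & _
    Location & "].[" & temp2 & "] As csv;"
db.Execute strInsert, dbFailOnError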
I've been asked to build a sort of "exporting" feature for a site in ASP Classic.
I thought it would be really simple to make, but I've hit a problem. I get the right number of rows in the file, but I always get the same recordset row, as if the recordset isn't updating the values during the for-each loop.
Here is my code:
dim fs,f
set fs = Server.CreateObject("Scripting.FileSystemObject")
set f = fs.OpenTextFile(Server.MapPath("thebackupfile.csv"), 8, true)
set RS = conn.execute("SELECT * FROM sometable")
For each fField in RS.Fields
    row = row & " " & fField.Name & ";"
Next
row = left(row, len(row)-1)
f.WriteLine(row)
do while not RS.EOF
    For each fField in RS.Fields
        csv = csv & " " & RS(fField.Name) & ";"
    Next
    csv = left(csv, len(csv)-1)
    f.WriteLine(csv)
    RS.movenext
loop
f.Close
set f = Nothing
set fs = Nothing
I can't figure out why I get the first row n times, where n equals the number of rows in the recordset.
Any advice? Am I going about this the wrong way?
I do spot a bug, though those results seem strange to me, too. The bug I see is that you are not clearing your csv variable on each iteration, so when writing each new row you would also repeat everything that came before.
Update: Reading this again, the only time you write a new line is at the record boundaries, as part of the f.WriteLine() call. So what I expect is that you get the right number of rows, but each row gets longer and longer, with new values appended to the end. Scroll the file to the right, and I bet you'll see the items from every earlier row.
This means my code below should fix your entire issue.
To fix the bug with the csv variable, rather than add code to reset it, I would avoid the buffer string variable entirely and write everything directly to the stream. I have also marked a spot to flush output every so often; note that a Scripting TextStream has no Flush method, but flushing matters when you stream to the browser: if you never flush the response buffer, the browser might not see anything from your web server for a long time, and that could result in a timeout error. Given that this is tagged asp-classic, I wonder why you write to a file at all rather than to the response stream.
dim fs,f,d
set fs = Server.CreateObject("Scripting.FileSystemObject")
set f = fs.OpenTextFile(Server.MapPath("thebackupfile.csv"), 8, true)
d = ""
set RS = conn.execute("SELECT * FROM sometable")
For each fField in RS.Fields
    f.Write(d)
    f.Write(" ")
    f.Write(fField.Name)
    d = ";"
Next
f.WriteLine("")
Dim i
i = 0
do while not RS.EOF
    d = ""
    For Each fField in RS.Fields
        f.Write(d)
        f.Write(" ")
        f.Write(RS(fField.Name))
        d = ";"
    Next
    f.WriteLine("")
    RS.movenext
    i = i + 1
    If i Mod 50 = 0 Then
        'A TextStream has no Flush method; if you write to the response
        'stream instead of a file, call Response.Flush here.
    End If
Loop
f.Close
set f = Nothing
set fs = Nothing
You should first load the values of a record into an array, then join that array into a delimiter-separated line, then write that line to the output file:
set RS = conn.Execute("SELECT * FROM sometable")
fields = Array()
Redim fields(RS.Fields.Count-1)
For i = 0 To RS.Fields.Count-1
    fields(i) = RS.Fields(i).Name
Next
f.WriteLine """" & Join(fields, """;""") & """"
Do Until RS.EOF
    For i = 0 To RS.Fields.Count-1
        fields(i) = RS.Fields(i).Value
    Next
    f.WriteLine """" & Join(fields, """;""") & """"
    RS.MoveNext
Loop
However, even better than that would be having MySQL generate the output file directly (note that INTO OUTFILE writes the file on the machine running the MySQL server, not on the web server):
SELECT *
FROM sometable
INTO OUTFILE 'C:\\path\\to\\thebackupfile.csv'
FIELDS TERMINATED BY ';'
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
I have a text file containing one big row, which can include several row separators. How can I divide this big row into several rows using VBA?
I need to import this file. Here is how I do it (Sep is the field delimiter):
Open FName For Input Access Read As #1
While Not EOF(1)
    Line Input #1, WholeLine
    If Right(WholeLine, 1) <> Sep Then
        WholeLine = WholeLine & Sep
    End If
    StrArray = Split(WholeLine, Sep)
Wend
Or maybe you can suggest another way to import data from a txt file with both field and row separators?
Well, you have a lot of things in your question that need clarification, but maybe this will point you in the right direction. I am going to assume you are appending the data to a table.
You'll have to change the following things for your scenario:
1. ImportTableName
2. FieldName
3. FileName
4. sDelimiter - probably this too.
Also, don't forget to add some error handling. If you drop out without closing the file, it stays locked and will affect your attempts at retrying the import.
Const sDelimiter As String = " sep "

Dim rsImport As ADODB.Recordset
Dim vShortLine As Variant
Dim arrLongLine As Variant
Dim sLongLine As String

Set rsImport = New ADODB.Recordset
With rsImport
    'batch lock type, so the UpdateBatch below can commit all added rows at once
    .Open ImportTableName, CurrentProject.Connection, adOpenStatic, adLockBatchOptimistic
    Open FileName For Input As #1 ' Open file.
    While Not EOF(1)
        Line Input #1, sLongLine
        arrLongLine = Split(sLongLine, sDelimiter)
        For Each vShortLine In arrLongLine
            .AddNew
            !FieldName = vShortLine
        Next
    Wend
    Close #1 ' Close file.
    .UpdateBatch
    .Close
End With
Set rsImport = Nothing
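And a minimal sketch of that error handling, assuming the snippet above lives in a procedure (the name ImportDelimitedFile is hypothetical):

Sub ImportDelimitedFile()
    On Error GoTo CleanUp
    '... the import code from above ...
CleanUp:
    If Err.Number <> 0 Then MsgBox "Import failed: " & Err.Description
    On Error Resume Next 'cleanup itself should not raise a second error
    Close #1             'release the file even if the import failed partway through
End Sub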