LabVIEW - writing data from multiple DAQ Assistants to the same .csv file

I have the following problem with my VI, which I could not solve myself or through research:
When running the VI, the data should be stored in a .csv file. In the pictures, you can see the block diagram. When run, it produces the following file:
Test Steady State
T_saug_1/T_saug_2/Unbelegt/Unbelegt/T_ND/T_HD/T_Wasser_ein/T_Wasser_aus/T_front/T_back/T-right/T-left
18,320 18,491 20,873 20,838 20,463 20,969 20,353 20,543 20,480 20,618
20,618 20,238
As you can see, the data gets stored only in the first column (in the preview of the post it looks like a row, but it is really a column; "Test Steady State" is the header). But these temperatures are not the temperatures of the first sensor: it somehow stored the value of every sensor in the respective row. Once the first row was filled, it stopped storing data entirely. (I could not figure out how to attach the file here, otherwise I would have done so.) I want to store the data for each sensor in its associated column.
Another problem I have: the waveform chart, which shows all the temperatures, only updates every 4-6 seconds. Not only is the interval between updates inconsistent, but from my understanding it should update every second, since the while loop has a wait timer set to 1000 ms. I don't know what my mistake is here.
Please let me know if you have any ideas on how to solve these problems, or suggestions on where I could find answers to my questions. I am very new to LabVIEW, so I am sorry if this question is silly.
With best regards and thank you for the patient help,
lempy.
[Attached images: csv file, block diagram, DAQ Assistant for the PT100, DAQ Assistant for the TC]

The Write Delimited Spreadsheet VI has two boolean inputs: Append to file? and transpose?
Append to file? is not wired for the first write, so it defaults to FALSE. That means the file is overwritten on every write. For the second and third calls it is set to TRUE, so that data is appended.
The simplest solution is to move the first two write functions outside the main loop. That way the file is overwritten with the headers once, at the start of the VI, and the values are then appended as desired.
transpose? will swap rows and columns. Wire TRUE to it, and check if it works.
About your second question:
A loop runs only as fast as the slowest process inside it; the wait timer executes in parallel with the rest of the code, so the loop period is the longer of the two, not their sum. If the graph only updates every 6 s, something in the loop takes up to 6 s to complete. My guess is that the temperature readings take that long: a DAQ Assistant configured to read, say, 6000 samples at 1 kHz blocks for 6 s per call.

Related

Create variable counting unique IDs in long format table

I would like to structure my long-format SPSS file so I can clean it and get a better overview. However, I ran into some problems.
Patients appear several times in the database (column patientID). How can I make a new variable that contains each patient ID only once, preferably on the line with the baseline data / the first moment the questionnaires are completed?
I have consulted my colleagues, but without concrete solutions or answers.
This can be done using the LAG function, after sorting the file:
SORT CASES BY PatientID_Pseudo OpenInvulMomenten.
IF $casenum=1 OR ($casenum>1 AND PatientID_Pseudo<>LAG(PatientID_Pseudo)) newvar=PatientID_Pseudo.
EXECUTE.

Jitterbit: target CSV-file created with only header although "do not create empty files" is checked

In Jitterbit Dataloader 10.37 I want to create CSV files from Salesforce data, but only if the query returns data.
I checked "do not create empty files" on the local-file target type, but it is still creating a CSV with just the header and no data. I do not want files created with no data in them. Leaving the header out of the files entirely is not an option: I will need it when the query does return data.
Any suggestions? What am I missing?
I've seen this happen in situations where the write operation comes after a couple of other operations. In that case a header is written in the first operation, then another header is written in a second operation: the first row is read as the header, and the second row (another header) is read as data and written out.
I always add a condition that checks whether one of the fields equals its own name, something like the following, to just skip those rows.
<trans>
if(Id=="Id",
    false;,
    true;
);
</trans>
The best way to do this is to send your output to a variable array, then check that variable to see whether data is present. So set your target to a global variable, then add a script after that target and do your validation there. To test your script, use DEBUGBREAK(); and look at your variable's content, so you can see exactly what is going into it.
Then make your condition statement:
if(Length($variable) > 1, RunOperation("operation:myexport"), "novalue");

How to copy variable values within an SPSS file?

I have three separate SPSS files with information about roughly 7500 hemicolectomy patients. One file contains the information about the hemicolectomies, the second one about other surgeries the patients have had during their lifetime, and the last one contains information about their sick leaves.
I have merged the files (idnumber is the common variable) into a single SPSS document, but I ran into a problem filtering out the surgeries and sick leaves that have nothing to do with the hemicolectomy. I'm quite new to SPSS, so the simplest way I could think of is to somehow copy the hemicolectomy info to every case and then use the date/time calculator to choose which sick leaves and surgeries to discard. Switching to wide format is impractical due to the large number of unrelated surgeries and sick leaves: I'd have thousands of variables.
So basically I'd like to do the following:
IF idnumber = idnumber THEN variable1=variable1 AND variable2=variable2 etc
How would I go about doing this?
All help will be appreciated!
The IF command can only be used with one transformation:
IF [condition] [transformation].
Assuming both of your files are sorted by idnumber:
UPDATE file=[master_file_reference]
/file=[secondary_file_reference]
/BY idnumber.
EXECUTE.
The file references can be made either by dataset name or by full path.
More on the UPDATE command:
https://www.ibm.com/support/knowledgecenter/en/SSLVMB_24.0.0/spss/base/syn_update_examples.html
I can't comment yet, so I'm sorry if I misunderstand the problem; I would've asked for clarification in the comments to the question. Here goes:
So you have three sources of data: dates (?) of hemicolectomies, one per case; dates (?) of other surgeries, multiple per case; and sick leaves, even more per case. Is that right?
I'd try solving the problem before matching all three files, by matching the file that contains one observation per patient (presumably the hemicolectomies) to the one with the second-most observations per patient (presumably the other surgeries) with the /TABLE keyword:
MATCH FILES /FILE='surgeries.sav' /TABLE='hemicolectomies.sav'
  /BY idnumber.
EXECUTE.
This will "fill up" the blank cells for each patient with the hemicolectomy data.
Now use the date/time values to check which surgeries "belong" to the hemicolectomies, reduce your data accordingly, and then match the result to the sick-leave data using the /TABLE keyword again; a sketch of the date check follows below.
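For the date check, here is a minimal sketch in SPSS syntax; the variable names hemi_date and surgery_date and the 30-day window are assumptions, since the actual variable names and the clinically relevant window are not given:
* hemi_date and surgery_date are placeholder names, and the
  30-day window is only an example cutoff.
COMPUTE days_diff = DATEDIFF(surgery_date, hemi_date, "days").
SELECT IF (days_diff >= 0 AND days_diff <= 30).
EXECUTE.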
Seems like the easiest solution to me.

Select from table where two parameters are satisfied in MySQL

I am totally clueless about how to get the following kind of result from the same table in MySQL.
Required Result:
The raw data is shown in the image below.
mc_id and op_id can be different. For example, if mc_id is 4 and op_id is 10, then it has to go through each vouid and extract done_on_date; then, for the same mc_id 4 and op_id 10, it has to extract the done_on_date that comes after the first extracted done_on_date. This second extracted done_on_date is what we refer to as next_done_on_date, just to distinguish it. This continues accordingly until the end of the table. I hope I am clear enough now.
The idea is basically to see when a particular operation (op_id) was carried out for the machine with the given mc_id. The first time the operation is done is referred to as done_on_date, and when the same operation is carried out for the same machine the next time, we refer to it as next_done_on_date, though inside the database table it is also stored as done_on_date.
Let me know if anything still needs to be clarified.
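What is being described is, for every row, finding the earliest done_on_date for the same mc_id and op_id that lies after the row's own done_on_date. A minimal sketch in MySQL, assuming a table named maintenance with columns mc_id, op_id, vouid and done_on_date (the real schema is only visible in the image), could use a correlated subquery:
-- maintenance and its column names are assumed from the question text.
SELECT t.mc_id,
       t.op_id,
       t.vouid,
       t.done_on_date,
       (SELECT MIN(n.done_on_date)
          FROM maintenance n
         WHERE n.mc_id = t.mc_id
           AND n.op_id = t.op_id
           AND n.done_on_date > t.done_on_date) AS next_done_on_date
  FROM maintenance t
 ORDER BY t.mc_id, t.op_id, t.done_on_date;
On MySQL 8.0 or later, LEAD(done_on_date) OVER (PARTITION BY mc_id, op_id ORDER BY done_on_date) gives the same result more directly.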

Syncing the timestamp with incoming data in LabVIEW

In LabVIEW, I'm trying to produce a .csv file with one column being the timestamp and the others being the data, so that each data point is timestamped. I have succeeded in doing that, but my timestamp and data aren't synced, so the values don't always align. For example, sometimes a line will have a data point but no timestamp associated with it. Here is the section of the code that takes the waveform (data) and the timestamp and writes out the spreadsheet file. Not shown is the time delay.
Thank you in advance!
At first sight, one problem is that you don't seem to be timestamping every single data point, but rather an array of points: the Waveform Chart is converted to a DBL array, that array is converted to a string, and that string gets the timestamp. If the array has more than one point, likely only the first point will get the timestamp.
Apart from that, it also seems that if the record flag is on, each loop iteration appends to an array and you write the entire array to the csv (so in fact writing duplicates). Is that correct?