In LabVIEW, I'm trying to produce a .csv file with one column being the timestamp and the others being the data, so each data point is timestamped. I have succeeded in doing that, but my timestamp and data aren't synced, so the values don't always align. For example, sometimes a line will have a data point but no timestamp associated with it. Here is the section of the code that takes the waveform (data) and timestamp to spit out the spreadsheet file. Not shown is the time delay.
Thank you in advance!
At first sight, one problem is that you don't seem to be timestamping every single data point, but instead an array of points: the Waveform Chart is converted to a DBL array, that array is converted to a string, and that string gets the timestamp. If the array has more than one point, most likely only the first point will get a timestamp.
Apart from that, it also seems that when the record flag is on, each loop iteration appends to an array and you write the entire array to the CSV (so in fact you are writing duplicates). Is this correct?
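Conceptually, each sample needs its own timestamp derived from the waveform's start time (t0) and sample interval (dt), with one row per sample. A minimal sketch of that row layout, written in Python rather than LabVIEW, where every name and value is made up:

import csv
from datetime import datetime, timedelta

t0 = datetime.now()                      # stand-in for the waveform's t0
dt = timedelta(seconds=0.1)              # stand-in for the waveform's dt
samples = [1.02, 1.05, 0.98, 1.01]       # stand-in for the DBL array from the chart

with open("log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for i, value in enumerate(samples):
        # one timestamp per data point: t0 + i*dt, on the same line as its value
        writer.writerow([(t0 + i * dt).isoformat(), value])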
I have the following problem with my VI, which I could not solve by myself or through research:
When running the VI, the data should be stored in a .csv-File. In the pictures, you can see the block diagram. When running, it produces the following file:
Test Steady State
T_saug_1/T_saug_2/Unbelegt/Unbelegt/T_ND/T_HD/T_Wasser_ein/T_Wasser_aus/T_front/T_back/T-right/T-left
18,320 18,491 20,873 20,838 20,463 20,969 20,353 20,543 20,480 20,618
20,618 20,238
As you can see, the data gets stored only in the first column (in the preview of the post it looks like a row, but it is really a column; T steady state is the header). But these temperatures are not the temperatures of the first sensor; it somehow stored the value of every sensor in its own row. When the first row was filled, it stopped storing data entirely. I did not figure out how I could insert a file here, otherwise I would have done so... I want to store the data for each sensor in the associated column.
Another problem I have: the waveform chart, which shows all the temperatures, only updates every 4-6 seconds. Not only is the interval between updates not always the same, but from my understanding it should update every second, since the while loop has a wait timer set to 1000 ms. I don't know what my mistake is here...
Please let me know if you have any ideas on how to solve the problems I have or suggestions where I could find answers to my questions. I am very new to LabVIEW, I am sorry if this question is silly.
With best regards and thank you for the patient help,
lempy.
Attachments: csv-file, Block diagram, DAQ Assistant for PT100, DAQ Assistant for TC
The Write Delimited Spreadsheet VI has two boolean inputs: Append to file? and transpose?
Append to file? is not set for the first write, so it defaults to FALSE. That means on each write, the file is overwritten. For the second and third calls, it is set to TRUE, so that data is appended.
The simplest solution is to put the first two write functions outside the main loop. This overwrites the file at the start of the VI with the headers, and values will be appended as desired.
transpose? will swap rows and columns. Wire TRUE to it, and check if it works.
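For reference, the intended write pattern, sketched in Python rather than LabVIEW (file name, headers and readings are made up): write the header once with overwrite semantics before the loop, then append one row per iteration with all sensor values as columns.

import csv

HEADERS = ["T_saug_1", "T_saug_2", "T_ND", "T_HD"]              # shortened, hypothetical header row

with open("steady_state.csv", "w", newline="") as f:            # append? = FALSE: overwrite once, before the loop
    csv.writer(f).writerow(HEADERS)

for reading in ([18.32, 18.49, 20.46, 20.97], [18.40, 18.55, 20.47, 20.99]):  # fake loop data
    with open("steady_state.csv", "a", newline="") as f:        # append? = TRUE inside the loop
        csv.writer(f).writerow(reading)                          # one row per iteration, sensors as columns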
About your second question:
A loop runs as fast as the slowest process inside it. If the graph updates only every 6 s, something takes 6 s to complete. My guess is that the temperature readings take that long...
Can I populate a list with JSON data? I have a general list containing data available for several sessions, but I need to filter it by my current session and insert the results into another list. My idea is to use the filtered JSON data, since I have successfully filtered it in JSON format. I've looked into some threads that might be related but have found nothing so far. Hope someone can point me to the right page.
I missed this page (or maybe I overlooked it): https://forum.alphasoftware.com/showthread.php?119524-How-to-populate-a-List-from-a-JSON-formatted-field.
Anyway, populating a list with JSON data in Alpha Anywhere is easy to do. First, get the JSON data (in my case I produce it from another list). With this data (already in JSON format), I apply the filter using:
var filtered_json = find_in_object(JSON.parse('my_JSON_data'), {my_filter_condition});
The result will be an array of objects (it displays as [object Object][object Object]).
Finally, populate the list with the result:
var lObj = {dialog.object}.getControl('my_list_ID');
lObj.populate(filtered_json);
I know how to parse JSON cells in OpenRefine, but this one is too tricky for me.
I've used an API to extract the calendars of 4730 Airbnb rooms, identified by their IDs.
Here is an example of one JSON file: https://fr.airbnb.com/api/v2/calendar_months?key=d306zoyjsyarp7ifhu67rjxn52tv0t20&currency=EUR&locale=fr&listing_id=4212133&month=11&year=2016&count=12&_format=with_conditions
For each ID and each day of the year from now until November 2017, I would like to extract the availability of the room (true or false) and its price on that day.
I can't figure out how to parse out this information. I guess it requires a series of nested forEach calls, but I can't find the right way to do this with OpenRefine.
I've tried, of course,
forEach(value.parseJson().calendar_months, e, e.days)
The result is an array of arrays of dictionaries, which confuses me.
Any help would be appreciated. If the operation is too difficult in OpenRefine, a solution in R (or Python) would also be fine for me.
Rather than just creating your project as text and working with GREL to parse it out...
The best way is to just select the JSON record part you want to work with using our visual importer wizard for JSON and XML files (you can even use a URL pointing to a JSON file, as in your example). A video tutorial shows how: https://www.youtube.com/watch?v=vUxdB-nl0Bw
Select the JSON part that contains the records you want to parse and work with (this can be any repeating part; just select one of them and OpenRefine will extract all the rest).
Limit the number of data rows you want to load during creation, or leave the default of all rows.
Click Create Project and you're now in Rows mode. However, if you think Records mode might be better suited for your context, just import the project again as JSON and then select the next outer area of the content, perhaps a larger array that contains a key field. In this example, the key field would probably be the date, which is why I would highlight the whole record for a given date. This way OpenRefine will have keys for each record, and Records mode lets you work with them better than Rows mode.
Feel free to take this example, make it better and even more helpful for everyone, and add it to our wiki section on How to Use.
I think you are on the right track. The output of:
forEach(value.parseJson().calendar_months, e, e.days)
is hard to read because OpenRefine and JSON both use square brackets to indicate arrays. What you are getting from this expression is an OpenRefine (OR) array containing twelve items (one for each month of the year). The items in the OR array are JSON, each one an array of days in the month.
To keep the steps manageable I'd suggest tackling it like this:
First use
forEach(value.parseJson().calendar_months,m,m.days).join("|")
You have to use 'join' because OR can't store OR arrays directly in a cell - it has to be a string.
Then use "Edit Cells->Split multi-valued cells" - this will get you 12 rows per ID, each containing a JSON expression. Now for each ID you have 12 rows in OR
Then use:
forEach(value.parseJson(),d,d).join("|")
This splits the JSON down into the individual days
Then use "Edit Cells->Split multi-valued cells" again to split the details for each day into its own cell.
Using the JSON from the example URL above, this gives me 441 rows for the single ID; each contains the JSON describing the availability and price for a single day. At this point you can use the 'fill down' function on the ID column to fill in the ID for each of the rows.
You've now got some pretty easy JSON in each cell - so you can extract availability using
value.parseJson().available
etc.
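For comparison, here is the same traversal written outside OpenRefine, in plain Python. Only calendar_months, days and available come from the question and answer above; the date and price key names are guesses, so inspect one day object before relying on them.

import json

with open("calendar.json") as f:          # one saved API response per listing ID
    data = json.load(f)

rows = []
for month in data["calendar_months"]:     # twelve items, one per month
    for day in month["days"]:             # one object per day
        rows.append({
            "date": day.get("date"),             # assumed key name
            "available": day.get("available"),   # true/false, as extracted above
            "price": day.get("price"),           # assumed key name; may itself be an object
        })

print(len(rows), rows[:2])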
I am adding rows in sequence in a script component. The sequence is such that I parse the values from an input string and then add them to the output. This way all values from a particular input string are added before those from the next input string (or so I assume).
Is this assumption incorrect?
I ask because I need to use the pivot transform (needs data sorted) after the script component and for performance reasons I would rather not add a sort between them.
So when I pivot on the original input string's identifier, will my pivot results be correct?
I needed a similar solution. The short answer is yes, but this may depend on how you write your script task. Use the Input0Buffer buffer and call .NextRow, and then, after whatever logic/processing you do, send the row to one of your output buffers using AddRow. The operation becomes a synchronous, row-by-row operation.
I need to store some daily data (for the past month) about each of my accounts in SFDC. The data is relatively small (an integer per day per account), and I was wondering if I could take the array for the month, turn it into JSON, and store that in a text field on the Account object, or will I need to create a custom object to house this data?
You can certainly create a custom Text Area (Long) field on Account, assuming you plan to stay within the 32k limit of that field. Something to consider would be to make your JSON non-pretty by removing all the spaces first (note that the blanket replace below also strips spaces inside your data values, so only use it if that is acceptable, or serialize the JSON without pretty-printing in the first place):
str = str.replace(' ', '');
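To see how much the whitespace matters, here is a small illustration in Python (purely a stand-in for whatever produces your JSON; on the Salesforce side this would be Apex, and the data is invented):

import json

daily = {"2016-11-%02d" % d: d * 3 for d in range(1, 31)}         # hypothetical month of per-day integers

pretty = json.dumps(daily, indent=2)                               # padded with spaces and newlines
compact = json.dumps(daily, separators=(",", ":"))                 # no extra whitespace at all

print(len(pretty), len(compact))   # compact leaves much more headroom under the 32k field limit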
Also, this approach assumes that each write will completely overwrite your last JSON save. If you want to retain a history, then perhaps a related (lookup) custom object is in order.
FYI - salesforce.stackexchange.com is in live beta now and it's a great place to get questions like these answered.