I am totally clueless about how to get the following kind of result from the same table in MySQL.
Required Result:
The raw data is as shown in the image below.
mc_id and op_id can be different. For example, if mc_id is 4 and op_id is 10, then it has to loop through each vouid and extract done_on_date; then it has to loop through again for the same mc_id 4 and op_id 10 and extract the done_on_date that comes after the first extracted done_on_date. We refer to this second extracted done_on_date as next_done_on_date, just to distinguish it. This continues until the end of the table. I hope I am clear enough now.
The idea is basically to see when a particular operation (op_id) was carried out for the machine in question (mc_id). The first time the operation was done is referred to as done_on_date, and when the same operation is carried out for the same machine the next time, we refer to that as next_done_on_date, although inside the database table it is actually stored as done_on_date.
Let me know if anything still needs to be clarified.
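To make the required pairing concrete, here is a minimal sketch of the kind of self-join that would produce it, assuming a table name such as maintenance_log (the real table name is not given in the post) with the columns mentioned above:

-- for each row, find the next time the same operation was done on the same machine
SELECT cur.mc_id,
       cur.op_id,
       cur.vouid,
       cur.done_on_date,
       (SELECT MIN(nxt.done_on_date)
          FROM maintenance_log nxt
         WHERE nxt.mc_id = cur.mc_id
           AND nxt.op_id = cur.op_id
           AND nxt.done_on_date > cur.done_on_date) AS next_done_on_date
FROM maintenance_log cur
ORDER BY cur.mc_id, cur.op_id, cur.done_on_date;

The correlated subquery picks, for each row, the earliest later done_on_date of the same mc_id/op_id combination; rows with no later occurrence get NULL as next_done_on_date.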
I have the following problem with my VI, which I could not solve by myself or research:
When running the VI, the data should be stored in a .csv-File. In the pictures, you can see the block diagram. When running, it produces the following file:
Test Steady State
T_saug_1/T_saug_2/Unbelegt/Unbelegt/T_ND/T_HD/T_Wasser_ein/T_Wasser_aus/T_front/T_back/T-right/T-left
18,320 18,491 20,873 20,838 20,463 20,969 20,353 20,543 20,480 20,618
20,618 20,238
As you can see, the data gets stored only in the first column (in the preview of the post it looks like a row, but it is really a column; T steady state is the header). But these temperatures are not the temperatures of the first sensor; it somehow stored the value for every sensor in the respective row. When the first row was filled, it stopped storing data entirely. I could not figure out how to attach a file here, otherwise I would have done so. I want to store the data for each sensor in the associated column.
Another problem I have: the waveform chart, which shows all the temperatures, only updates every 4-6 seconds. Not only is the interval between updates not always the same, but from my understanding it should update every second, since the while loop has a wait timer set to 1000 ms. I don't know what my mistake is here.
Please let me know if you have any ideas on how to solve these problems, or suggestions for where I could find answers to my questions. I am very new to LabVIEW, so I am sorry if this question is silly.
With best regards and thank you for the patient help,
lempy.
Attached images: csv-file, block diagram, DAQ Assistant for PT100, DAQ Assistant for TC.
The Write Delimited Spreadsheet VI has two boolean inputs: Append to file? and transpose?
Append to file? is not wired for the first write, so it defaults to FALSE. That means the file is overwritten on each write. For the second and third calls it is set to TRUE, so that data is appended.
The simplest solution is to put the first two write functions outside the main loop. This overwrites the file with the headers at the start of the VI, and values will be appended as desired.
transpose? will swap rows and columns. Wire TRUE to it, and check if it works.
About your second question:
A loop runs only as fast as the slowest process inside it. If the graph is updated only every 6 s, something takes 6 s to complete. My guess is that those temperature readings take that long.
I have three separate SPSS files with information about roughly 7500 hemicolectomy patients. One file contains the information about the hemicolectomies, the second one about other surgeries the patients have had during their lifetime, and the last one contains information about their sick leaves during their lifetime.
I have merged the files (idnumber is the common variable) into a single SPSS document, but I ran into a problem with filtering out the surgeries and sick leaves that have nothing to do with the hemicolectomy. I'm quite new to SPSS, so the simplest way I could think of doing this is by somehow copying the hemicolectomy info to every case and then just using the date/time calculator to choose which sick leaves and surgeries to discard. Switching to wide format is impractical due to the large number of unrelated surgeries and sick leaves: I'd have thousands of variables.
So basically I'd like to do the following:
IF idnumber = idnumber THEN variable1=variable1 AND variable2=variable2 etc
How would I go about doing this?
All help will be appreciated!
The IF command can only be used with one transformation:
IF [condition] [transformation].
Assuming both of your files are sorted by idnumber:
UPDATE file=[master_file_reference]
/file=[secondary_file_reference]
/BY idnumber.
EXECUTE.
The file reference can be made either by their dataset name, or by their full path.
More on the UPDATE command:
https://www.ibm.com/support/knowledgecenter/en/SSLVMB_24.0.0/spss/base/syn_update_examples.html
I can't comment yet, so I'm sorry if I misunderstand the problem; I would have asked for clarification in the comments to the question... here goes.
So you have three sources of data: dates (?) of hemicolectomies, one for each case; dates (?) of other surgeries, multiple for each case; and sick leaves, even more for each case. Is that right?
I'd try solving the problem before matching all three files, by matching the file that contains one observation per patient (presumably hemicolectomies) to the one with the second-most observations per patient (presumably other surgeries) using the /TABLE keyword:
MATCH FILES /FILE='surgeries.sav' /TABLE='hemicolectomies.sav'
  /BY idnumber.
EXECUTE.
this will "fill up" the blank cells for each patient with the hemicolectomy data.
now use the datetime to check which surgeries "belong" to the hemicolectomies, thus reduce your data and match it to the sickleave data using the /table keyword again.
Seems like the easiest solution to me.
I've been doing some harmless operations, basically combining 2 ids to create a unique id, and as the ids are numbers, I decided to use math operations to combine them (rather than string concatenation). So, as my second id is always < 10000, what I did was id1*10000 + id2.
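(For example, with made-up ids id1 = 123 and id2 = 4567, this gives 123*10000 + 4567 = 1234567.)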
Problem is, Tableau doesn't seem to know how to add those numbers. To illustrate better, I created Calculation 1 (that is Id1*10000), Calculation2 (that is Id2) and Calculation3 (that is Calculation1 + Calculation2).
Check the file. http://www.speedyshare.com/Wc5zP/Tableau-can-t-sum.twbx
Original datasource is a csv file, but it's extracted (to a tde).
One thing that might be happening is that Tableau has some limitation on the size of int it can store. Does anyone know how I can change this? (Int64 would do the trick, if that is actually the problem.)
Here is a snapshot.
I'm using Gerrit REST API to query all changes whose status is "merged". My query is
https://android-review.googlesource.com/changes/?q=status:merged&n=2
where "n=2" limits the size of query results to 2. So I got a JSON object like:
Of course there are more results. According to the REST documentation:
If the n query parameter is supplied and additional changes exist that match the query beyond the end, the last change object has a _more_changes: true JSON field set. Callers can resume a query with the N query parameter, supplying the last change’s _sortkey field as the value.
So I added the query parameter N with the _sortkey of the last change, 100309. The new query is:
https://android-review.googlesource.com/changes/?q=status:merged&n=2&N=002e4203000187d5
With this new query, I was hoping to get the next 2 results, since I provided the _sortkey as a cursor into my previous search results.
However, it's really weird that this new query returns exactly the same results as the previous query, instead of the next 2 results as I expected. It seems like providing "N=002e4203000187d5" has no effect at all.
Does anybody know why using _sortkey to resume my query doesn't work?
I chatted with one of the developers at Google, and he confirmed that _sortkey has been removed from the newer versions of Gerrit they are running at android-review and gerrit-review. The N= parameter is no longer valid. The documentation will be updated to reflect this.
The alternative is to use &S=x to skip x results, which I tested and works well.
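For example, having already fetched the first 2 results with n=2, the next 2 should be retrievable with something along the lines of:
https://android-review.googlesource.com/changes/?q=status:merged&n=2&S=2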
sortkey is deprecated in Gerrit v2.9 -
see the (Gerrit) ReleaseNotes-2.9.txt, under REST API - Changes:
[[sortkey-deprecation]]
Results returned by the [query changes] endpoint are now paginated using offsets instead of sortkeys.
The sortkey and sortkey_prev parameters on the endpoint are deprecated.
The results are now paginated using the --limit (-n) option to limit the number of results, and the -S option to set the start point.
Queries with sortkeys are still supported against old index versions, to enable online reindexing while clients have an older JS version.
See also here -
PSA: Removing the "sortkey" field from the gerrit-on-borg query interface:
...
Our solution is to kill the sortkey field and its related search operators (sortkey_before, sortkey_after, and resume_sortkey).
There are two ways you can achieve similar functionality.
Add "&S=" to your query to skip a fixed number of results.
(Note that this redoes the search so new results may have jumped ahead and
you might process the same change twice.
This is true of the resume_sortkey implementation as well,
so your code should already be able to handle this.)
Use the before/after operators.
Instead of taking the sortkey field from the last returned change and
using it in a resume_sortkey operator, you take the updated field from
the last returned change and use it in a before operator.
(This has slightly different semantics than the sortkey field, which
uses the change number as a tiebreaker when changes have similar updated times.)
...
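To illustrate the second option (a sketch based on the PSA above, not verified against a live server): if the last change returned in a page had, say, "updated": "2014-05-30 12:34:56", the next page could be requested with a query along the lines of status:merged before:"2014-05-30 12:34:56" together with the same n limit; the exact date format and URL encoding follow the Gerrit query documentation.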
I'm writing a video game in JavaScript on a server that saves info in a MySQL database, and I am trying to make my first effect, attached to a simple healing potion item. To implement the effect I query a spells table by spell_id and get a field called effect containing the code to execute on my server; I use the eval() function to execute the code in the string. In order to optimize the game I want to run as few queries as possible. For this instance (and I think the answer will help me with other similar effects) I want to update the 'player' table, which contains a stat column like 'health', and add n to it, where n is a decreasing number: 15, then 250 ms later 14, then 13, and so on until n = 1. The net effect is a large jump in health followed by smaller and smaller increases. Accomplishing this is relatively easy. If the player's health reaches his maximum allowed limit, the effect should stop immediately...
But I'd like to do a single UPDATE statement for each increase, rather than a SELECT (to check whether health would exceed max_health) plus an UPDATE every 250 ms, so that the player's health doesn't go above his max health. So, to be concrete, I'd like a single UPDATE that, given this data
player_id health max_health
========= ====== ==========
1 90 100
will add 15 to health unless (max_health - health) < 15; in this case it should only add 10.
An easier solution might be to just return health and max_health after each update; then I wouldn't mind doing a final check, in pseudo-code:
if health > max_health
update health set health = max health
So if anyone could explain how to return fields after an update, that would help.
Or if anyone could show how to use logic within the update, that would also help.
Also, if I didn't give enough information, I'm sorry; I'd be glad to provide more. I just didn't want to make the question hard to understand.
update player
set health = least(max_health, health + <potion effect>)
where player_id = ...
EDIT
For your other question: normally, I think that UPDATE returns the number of affected rows. So if you try to update health when health is already equal to max_health, it should return 0.
I'd know how to do this in PHP, for example, but you said you were using JavaScript... so?
http://dev.mysql.com/doc/refman/5.6/en/update.html
UPDATE returns the number of rows that were actually changed. The
mysql_info() C API function returns the number of rows that were
matched and updated and the number of warnings that occurred during
the UPDATE.
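If you also want to read back the resulting health without a separate SELECT beforehand, one possible sketch (using a MySQL user variable; the name @new_health and the literal values are just for illustration) is:

-- apply the capped heal and remember the resulting value in a user variable
UPDATE player
SET health = (@new_health := LEAST(max_health, health + 15))
WHERE player_id = 1;

-- read back the value that was just written, without querying the table again
SELECT @new_health;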
Use the ANSI-standard CASE expression, or the LEAST() function as in the other answer:
UPDATE player
SET health = CASE WHEN health + [potion] > max_health
                  THEN max_health
                  ELSE health + [potion]
             END
WHERE player_id = [player_id]