I have extracted the elliptic Fourier descriptors for each otolith, but I couldn't figure out how to normalize them with respect to the first harmonic or how to reconstruct mean shapes from them for each station. I tried myself but couldn't get any results using the Momocs package. I need expert help with the R script. The data are in an Excel file.
to use "first harmonic" normalization, just pass efourier() with default parameters (ie with norm=TRUE).
Have a look to Details section in ?efourier since this is usually not the best way to go (and I think it's very valid for otoliths)
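For example, a minimal sketch in R, assuming your outlines are already stored as a list of (x, y) coordinate matrices plus a data.frame with a station column (function names as in Momocs 1.x; check ?efourier and ?MSHAPES):

```r
# Minimal sketch, assuming `outlines` is a list of (x, y) matrices (one per
# otolith) and `meta` is a data.frame with a `station` factor.
library(Momocs)

oto   <- Out(outlines, fac = meta)      # build an outline (Out) object
oto_f <- efourier(oto, nb.h = 10,       # elliptic Fourier analysis;
                  norm = TRUE)          # first-harmonic normalization (default)

ms <- MSHAPES(oto_f, ~station)          # mean shape per station
plot_MSHAPES(ms)                        # quick comparison of the mean shapes
```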
Feel free to contact me directly!
All the best.
I am new to deep learning and semantic segmentation.
I have a dataset of medical images (CT) in DICOM format, in which I need to segment tumours and the organs involved. I also have the organs contoured by our physician, which we call the RT structure, likewise stored in DICOM format.
As far as I know, people usually use a "mask". Does that mean I need to convert all the contoured structures in the RT structure to masks, or can I use the information from the RT structure (.dcm) directly as my input?
Thanks for your help.
There is a special library called pydicom that you need to install before you can actually decode and later visualise the X-ray image.
Now, since you want to apply semantic segmentation to segment the tumours, the solution is to create a neural network that accepts as input a pair of [image, mask], where, say, all the locations in the mask are 0 except for the zones where the tumour is, which are marked with 1; in practice, your ground truth is the mask.
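Converting an RTSTRUCT contour into such a mask can be sketched roughly as below (assuming Python with pydicom and scikit-image, and that the contour's patient coordinates have already been converted to pixel row/column indices; the file name and contour values are made up):

```python
# Sketch: rasterise one contour (already in pixel row/col coordinates) into a
# binary mask with the same shape as the CT slice.
import numpy as np
import pydicom
from skimage.draw import polygon

ct = pydicom.dcmread("slice_001.dcm")            # hypothetical file name
image = ct.pixel_array.astype(np.float32)

# contour_rc: (K, 2) array of (row, col) vertices of one tumour contour
contour_rc = np.array([[100, 120], [100, 180], [160, 180], [160, 120]])

mask = np.zeros(image.shape, dtype=np.uint8)
rr, cc = polygon(contour_rc[:, 0], contour_rc[:, 1], shape=image.shape)
mask[rr, cc] = 1                                  # tumour -> 1, background -> 0
```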
Of course, for this you will have to implement your own CustomDataGenerator(), which must yield a batch of [image, mask] pairs at every step, as stated above.
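A minimal shape such a generator could take with Keras (the names and the load_pair() helper are placeholders for your own DICOM/mask loading code):

```python
# Minimal sketch of a CustomDataGenerator as a Keras Sequence; load_pair() is
# a placeholder for your own code that reads one image and its mask.
import numpy as np
from tensorflow.keras.utils import Sequence

class CustomDataGenerator(Sequence):
    def __init__(self, image_paths, mask_paths, batch_size=8):
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.image_paths) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        images, masks = [], []
        for img_path, msk_path in zip(self.image_paths[lo:hi],
                                      self.mask_paths[lo:hi]):
            img, msk = load_pair(img_path, msk_path)   # user-supplied loader
            images.append(img)
            masks.append(msk)
        # shapes: (batch, H, W, 1) for both the images and the masks
        return np.stack(images)[..., None], np.stack(masks)[..., None]
```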
I want to fit a curve to data obtained from an FFT. While working on this, I remembered that an FFT gives binned data, and therefore I wondered if I should treat this differently with curve-fitting.
If the bins are narrow compared to the structure, I think it should not be necessary to treat the data differently, but for me that is not the case.
I expect the right way to fit binned data is to minimize not the difference between the value of each bin and the fit, but the difference between the bin area and the area under the fitted curve over that bin, so that the energy in each bin matches the energy the curve assigns to the bin's range.
So my question is: am I thinking correctly about this? If not, how should I go about it?
Also, when looking around for information about this subject, I came across "maximum log likelihood", for example, but did not find enough information to understand whether and how it applies to my situation.
PS: I have no clue if this is the right site for this question, please let me know if there is a better place.
For an unwindowed FFT, the correct interpolation between bins uses a sinc (sin(x)/x) or periodic sinc (Dirichlet) interpolation kernel. For an FFT of samples of a band-limited signal, this will reconstruct the continuous spectrum.
A very simple and effective way of interpolating the spectrum (from an FFT) is to use zero-padding. It works both with and without windowing prior to the FFT.
1. Take your input vector of length N and extend it to length M*N, where M is an integer.
2. Set all values beyond the original N values to zero.
3. Perform an FFT of length N*M.
4. Calculate the magnitude of the output bins.
What you get is the interpolated spectrum.
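In numpy this is only a few lines (a sketch; the interpolation factor M and the test signal are arbitrary):

```python
# Sketch of the zero-padding interpolation described above.
import numpy as np

def interpolated_spectrum(x, M=8):
    """Magnitude spectrum of x, interpolated by a factor M via zero-padding."""
    N = len(x)
    padded = np.zeros(M * N, dtype=complex)
    padded[:N] = x                      # original samples, the rest stays zero
    return np.abs(np.fft.fft(padded))   # (N*M)-point FFT, then magnitude

# Example: a tone whose true frequency falls between the original FFT bins
fs = 100.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 12.3 * t)
spec = interpolated_spectrum(x, M=8)
```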
Best regards,
Jens
This can be done using maximum likelihood estimation (usually by maximizing the log-likelihood). This is a method that finds the set of parameters most likely to have yielded the measured data; the technique originates in statistics.
I have finally found an understandable source for how to apply this to binned data. Sadly I cannot enter formulas here, so I refer to that source for a full explanation: slide 4 of this slide show.
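One common form of this is the binned (Poisson) likelihood, sketched below; this assumes the bin contents can be treated as independent counts, which is the textbook case and only an approximation for FFT magnitudes. The peak model is a made-up example:

```python
# Sketch of a binned maximum-likelihood fit, assuming Poisson bin contents.
import numpy as np
from scipy.optimize import minimize

def model(centers, a, f0, w):
    """Hypothetical peak shape: expected content of each bin."""
    return a / (1.0 + ((centers - f0) / w) ** 2)

def neg_log_likelihood(params, centers, counts):
    mu = np.clip(model(centers, *params), 1e-12, None)  # keep log() defined
    return np.sum(mu - counts * np.log(mu))             # -log L up to a constant

centers = np.linspace(0.0, 50.0, 101)                   # bin centres
counts = np.random.default_rng(0).poisson(model(centers, 100.0, 20.0, 2.0))
fit = minimize(neg_log_likelihood, x0=[80.0, 18.0, 3.0], args=(centers, counts))
a_hat, f0_hat, w_hat = fit.x
```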
EDIT:
For noisier signals this method did not seem to work very well. A method that was a bit more robust is a least-squares fit in which the difference between the areas is minimized, as suggested in the question.
I have not found any literature to defend this method, but it is similar to what happens in the maximum log likelihood estimation, and yields very similar results for noiseless test cases.
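For reference, the area-matching least-squares fit can be sketched like this (the peak model, the bin edges and the placeholder data are all made up; scipy's quad integrates the model over each bin):

```python
# Sketch of the area-matching fit: compare each bin's content (value * width)
# with the integral of the model over that bin and minimize the squared
# differences.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import least_squares

def model(f, a, f0, w):
    """Example peak shape; replace with the curve you actually want to fit."""
    return a / (1.0 + ((f - f0) / w) ** 2)

def area_residuals(params, edges, bin_values):
    bin_areas = bin_values * np.diff(edges)
    model_areas = np.array([quad(model, lo, hi, args=tuple(params))[0]
                            for lo, hi in zip(edges[:-1], edges[1:])])
    return model_areas - bin_areas

edges = np.linspace(0.0, 50.0, 51)                     # N+1 bin edges
centers = 0.5 * (edges[:-1] + edges[1:])
bin_values = model(centers, 1.0, 20.0, 2.0) \
             + np.random.default_rng(1).normal(0.0, 0.02, centers.size)
fit = least_squares(area_residuals, x0=[0.8, 18.0, 3.0], args=(edges, bin_values))
a_hat, f0_hat, w_hat = fit.x
```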
What I want is the path that connects all the points in my graph, without having to tell the algorithm where to start and where to finish.
It needs to use the driving directions from the Google Maps API, but without setting a start or end point.
It is not the TSP problem, because I don't have a "start city" and I don't have to get back to the "start city" either.
As expressed in this question (Find the shortest path in a graph which visits certain nodes), I could just use permutations because I have only a few nodes, but the problem is that I need to analyze several groups of these few nodes, so I would like the function to be as fast as possible.
NOTE: I'm not looking for a Minimum Spanning Tree, nor for this either: https://math.stackexchange.com/questions/130863/connecting-all-points-on-a-plane-with-shortest-path-possible
I want a path that tells me: you will save gas if you go here first, then over there, then over there, and finally there.
Question: is there any library that can help me with that? Or is this a known problem that already has an exact answer? How could I solve it?
It sounds like you want an all pairs shortest path algorithm. This is the class of shortest path algorithms that attempt to compute the shortest path (or the length of the shortest path) between every pair of vertices in the graph.
This is a well-known problem, and solutions exist. Here's some reading material that describes other possible algorithms. There might be implementations of Johnson's algorithm for your chosen language and development environment.
Keep in mind, this is an expensive problem, computationally speaking.
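If an all-pairs distance table is indeed what you need as a building block, networkx (assuming Python) already implements Johnson's algorithm; the graph below is a made-up example:

```python
# Sketch: all-pairs shortest paths with networkx; edge weights are made-up
# driving costs.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4.0), ("B", "C", 2.5), ("A", "C", 7.0), ("C", "D", 1.5),
])

paths = nx.johnson(G, weight="weight")      # paths["A"]["D"] == ['A', 'B', 'C', 'D']
lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
print(paths["A"]["D"], lengths["A"]["D"])   # shortest path and its length (8.0)
```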
If I understand you correctly, you want one route that visits all the nodes, without a predefined start/end, and you want that route to be minimal. A possible solution is to modify your graph a bit so that a travelling salesman algorithm can produce a complete tour.
You start with your graph and add one extra node E. You connect that node to all other nodes in your graph and set the cost of all those edges to a very high constant M. You then unleash a travelling salesman algorithm on that graph, which will give you a path P starting at E, passing through all nodes and returning to E. If you remove the two edges in P that connect E to the rest of the path, you will have what you were looking for.
A quick intuitive proof that this is indeed what you were looking for: suppose it is not the cheapest way to connect all nodes, and call the supposedly better path Q. Q and P both connect all nodes in your original graph. Let the end points of Q be A and B; both of these are connected to node E with an edge of cost M. If you added those two edges to Q, you would get a better TSP solution than P, which is not possible, as P was the best.
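Since the asker mentions having only a few nodes, the trick can even be brute-forced; a sketch with a made-up distance matrix:

```python
# Brute-force sketch of the dummy-node trick: add a node E connected to every
# real node with a constant cost M, find the cheapest closed tour, then drop
# the two edges touching E to get an open path. Only feasible for a handful
# of nodes, since it enumerates all permutations.
from itertools import permutations

dist = [                    # made-up symmetric distances between nodes 0..3
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
n = len(dist)
E = n                       # index of the dummy node
M = 10 ** 6                 # constant cost of every edge touching E

def cost(u, v):
    return M if E in (u, v) else dist[u][v]

best_tour, best_cost = None, float("inf")
for perm in permutations(range(n)):          # tours of the form E -> perm -> E
    tour = (E,) + perm + (E,)
    c = sum(cost(a, b) for a, b in zip(tour, tour[1:]))
    if c < best_cost:
        best_tour, best_cost = tour, c

open_path = best_tour[1:-1]                  # strip E from both ends
print(open_path, best_cost - 2 * M)          # path and its cost without E
```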
As you are using Google Maps, your particular instance of TSP might satisfy the triangle inequality.
Are you really speaking of distances, or of travel times?
In the case of distances:
try Googling: "triangle traveling salesman problem"
IMPORTANT: the result is a very good approximation of the best result, with a guaranteed upper bound, but not always the best.
One way to go would be to use (self-organizing) Kohonen networks.
Assume you have n cities on a map (this works the same in any dimension).
Take a chain of n connected "neurons" and place it randomly on the map.
Then you do several iterations; one iteration consists of:
1. Choose any city (e.g. go through them in an ordered fashion).
2. Determine the "closest" neuron, call it x (e.g. by Euclidean distance).
3. Move this x closer to the city (e.g. take the direction vector from the neuron to the city and multiply it by a learning rate between 0 and 1).
4. Move the neighbours of this neuron towards the city as well (but less than in step 3, depending on the distance of the neighbours from the "current closest" neuron x).
One can choose various functions in steps 2, 3 and 4.
Notice also that this might not give the globally shortest path, since it depends on where the starting chain is located and on various other things. For this, one may consider doing several runs with different starting conditions, or (depending on the problem) one can help a bit with prior knowledge.
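For further readers, a rough sketch of the idea (the learning rate, neighbourhood width and iteration count are guesses; a proper implementation would typically use a ring with more neurons than cities):

```python
# Rough sketch of the Kohonen/SOM idea: a chain of "neurons" is pulled towards
# the cities; afterwards, visiting the cities in the order of their closest
# neurons gives an approximate short path. All constants are guesses.
import numpy as np

rng = np.random.default_rng(0)
cities = rng.random((10, 2))                 # 10 random cities on a unit map
n_neurons = len(cities)
neurons = rng.random((n_neurons, 2))         # a chain placed randomly on the map

lr, sigma = 0.8, 3.0
for _ in range(200):
    for city in cities:                      # step 1: pick a city
        d = np.linalg.norm(neurons - city, axis=1)
        x = int(np.argmin(d))                # step 2: closest neuron x
        # steps 3-4: move x and its chain neighbours towards the city,
        # weighted by how far along the chain they sit from x
        chain_dist = np.abs(np.arange(n_neurons) - x)
        influence = np.exp(-(chain_dist ** 2) / (2 * sigma ** 2))
        neurons += lr * influence[:, None] * (city - neurons)
    lr *= 0.99                               # slowly decay the learning rate
    sigma = max(0.5, sigma * 0.99)           # and the neighbourhood width

# Read off the path: order the cities by the index of their closest neuron
closest = [int(np.argmin(np.linalg.norm(neurons - c, axis=1))) for c in cities]
order = np.argsort(closest)
print("visit order:", order)
```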
I hope this helps to complete this question for further readers...
I am working on a project to create a generic equation solver... I envision this taking the form of 25-30 equations that will be saved in a table: variable names along with the operators.
I would then call this table to solve any equation with a missing variable, and it would move the operators and other pieces to the other side of the missing variable.
e.g. 2x + 3y = z; if x were the missing variable, I would call the equation with values for y and z and it would rearrange it to solve for x = (z - 3y)/2.
The equations could be linear, polynomial, binary (yes/no result)...
I am not sure whether there is any lightweight library available for this or whether it needs to be built from scratch... any pointers or guidance will be appreciated.
See Maxima.
I rather like it for my symbolic computation needs.
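For comparison, the same rearrangement the question describes takes only a few lines in Python with SymPy (my own illustration, not what this answer recommends; Maxima's solve() does the equivalent):

```python
# Sketch of "solve for the missing variable" with SymPy.
from sympy import symbols, Eq, solve

x, y, z = symbols("x y z")
equation = Eq(2 * x + 3 * y, z)          # 2x + 3y = z, stored once

# x is the missing variable; y and z have known values
expr_for_x = solve(equation, x)[0]       # symbolic result, equivalent to (z - 3y)/2
value = expr_for_x.subs({y: 4, z: 20})   # -> 4
print(expr_for_x, value)
```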
If such a general black-box algorithm could be made accurate, robust and stable, pigs could fly. Solutions can be nonexistent, multiple, parametrized, etc.
Even for linear equations it gets tricky to do it right.
Your best bet is some form of Newton algorithm, but generally you tailor it to your problem at hand.
EDIT: I didn't see you wanted something symbolic, rather than numerical. It's another bag of worms.
I just wrote a simple Unix command line utility that could be implemented a lot more efficiently. I can measure its performance by just running it on a number of inputs and measuring the time it takes. This will produce a set of pairs of numbers, s t, where s is the input size and t the processing time. In order to determine the performance characteristics of my utility, I need to fit a function through these data points. I can do this manually, but I prefer to be lazy and let a utility do it for me.
Does such a utility exist?
Its input is a sequence of pairs of numbers.
Its output is a formula that expresses how the second number depends as a function on the first, plus an error measure.
One step of the way is to have a utility that does this just for polynomials.
This has been discussed here but it didn't produce a ready-to-use solution.
The next step is to extend the utility to try non-polynomial terms: negative-degree polynomials (as in y = 1/x) and logarithmic terms (as in y = x log x) will need to be tried as well. One idea to cope with the non-polynomial terms is to just surround the polynomial fitting with x and y scale transformations. I don't know whether that will do. This question is related but not exactly the same.
As I said, I'm lazy: I'm not looking for ideas on how to write this myself, I'm looking for the reliable result of a project that has already done it for me. Any suggestions?
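Not a ready-to-use utility, but for reference, the "try several candidate forms, including x log x and 1/x, and report the best one with an error measure" idea sketched above fits in a few lines of numpy (the candidate set and the sample data below are my own):

```python
# Sketch: fit t ~ a*f(s) + b for a few candidate growth laws by linear least
# squares and report the candidate with the smallest residual.
import numpy as np

candidates = {
    "n":       lambda s: s,
    "n log n": lambda s: s * np.log(s),
    "n^2":     lambda s: s ** 2,
    "1/n":     lambda s: 1.0 / s,
}

def best_fit(sizes, times):
    sizes = np.asarray(sizes, dtype=float)
    times = np.asarray(times, dtype=float)
    results = {}
    for name, f in candidates.items():
        A = np.column_stack([f(sizes), np.ones_like(sizes)])   # t = a*f(s) + b
        coef, residuals, *_ = np.linalg.lstsq(A, times, rcond=None)
        err = residuals[0] if residuals.size else 0.0
        results[name] = (coef, err)
    return min(results.items(), key=lambda kv: kv[1][1])

sizes = [1000, 2000, 4000, 8000, 16000]        # made-up (input size, seconds)
times = [0.011, 0.024, 0.052, 0.110, 0.235]
name, (coef, err) = best_fit(sizes, times)
print(f"best model: t ~ {coef[0]:.3g} * {name} + {coef[1]:.3g} (SSE {err:.3g})")
```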
I believe that SAS has this, RS/1 has this, I think Mathematica has this, and Excel and most spreadsheets have a primitive form of this, with add-ons usually available for more advanced forms. There are lots of lab-analysis and statistical-analysis tools that have features like this.
Re: command-line tools:
SAS, RS/1 and Minitab were all command-line tools 20 years ago when I used them. I bet at least one of them still has this capability.