I am trying to understand how the FFT works, so to get a feel for what happens behind the scenes I put together an example. I came across a formula that calculates a frequency value from the complex numbers; the formula is below.
We calculate the frequency directly from the index of a complex number, the sampling rate, and the total number of items in the FFT list. From this information we can obtain the frequency value.
But in doing so we ignore all of the complex numbers themselves, and I don't understand why. Can someone give me a clue about that?
This video shows lots of FFT outputs as complex numbers, but the presenter ignores the complex values entirely and works directly from the index (k), since the sampling rate and the length of the FFT result (N) are already known.
From that calculation he obtains the frequency value. Is it normal to ignore all of the complex values, or have I missed something about this calculation?
These are my complex number values, and I want to calculate the frequency value by hand using the formula. How can I do that?
Thanks in advance for all comments.
I want to try the FFT calculations, but ignoring the complex numbers has me stuck.
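The complex numbers aren't actually ignored: their magnitudes are what tell you *which* bin index k holds the signal's energy; the formula f = k * fs / N then only converts that index into Hz. A minimal sketch in Python with NumPy, assuming a made-up 50 Hz sine sampled at 1000 Hz (both values are just illustrative):

```python
import numpy as np

fs = 1000          # sampling rate in Hz (assumed for this sketch)
N = 1000           # number of samples
t = np.arange(N) / fs
signal = np.sin(2 * np.pi * 50 * t)   # 50 Hz sine wave

spectrum = np.fft.fft(signal)          # N complex numbers
magnitudes = np.abs(spectrum)          # the complex values are used here:
                                       # their magnitudes say which bin holds energy
k = np.argmax(magnitudes[:N // 2])     # strongest bin in the positive half
frequency = k * fs / N                 # the formula from the question
print(frequency)                       # -> 50.0
```

So by the time the video applies k * fs / N, the complex values have already done their job (selecting k via their magnitudes); the phase information is simply not needed if all you want is the frequency.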
I am working on parking occupancy prediction using machine learning, specifically random forest regression. I have 6 features and have tried to implement the random forest model, but the results are not good. As I am very new to this, I do not know what kind of model is suitable for this kind of problem. My dataset is huge: 47 million rows. I have also used RandomizedSearchCV, but I cannot improve the model. Kindly have a look at the code below and help me improve it or suggest another model.
Random forest regression
The features used were extracted from the location data of the parking lots with a buffer. Kindly help me to improve.
So, the variables you use are:
['restaurants_pts', 'population', 'res_percent', 'com_percent', 'supermarkt_pts', 'bank_pts']
The thing I see is that, for a given parking lot, these variables never change, so the regression will just predict the "average" occupancy of that lot. A key part of your problem seems to be that occupancy is not the same at 5 pm and at 4 am...
I'd suggest you work on a time variable (e.g. arrival) so that it becomes usable.
By itself the variable cannot be understood by the model, but you can transform it to create categories. For example, preprocess it to keep only the HOUR, and then bin the hours (either each hour as its own category, or larger buckets like ['midnight - 6am', '6am - 10am', '10am - 2pm', '2pm - 6pm', '6pm - midnight']).
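A minimal sketch of that preprocessing with pandas, assuming a hypothetical datetime column named 'arrival' (the column name and timestamps are made up for illustration):

```python
import pandas as pd

# Hypothetical sample data; 'arrival' stands in for the question's time variable.
df = pd.DataFrame({'arrival': pd.to_datetime([
    '2023-05-01 04:30', '2023-05-01 08:15', '2023-05-01 17:45'])})

df['hour'] = df['arrival'].dt.hour
bins = [0, 6, 10, 14, 18, 24]          # bucket edges in hours of the day
labels = ['midnight-6am', '6am-10am', '10am-2pm', '2pm-6pm', '6pm-midnight']
df['time_bucket'] = pd.cut(df['hour'], bins=bins, labels=labels,
                           right=False, include_lowest=True)
print(df['time_bucket'].tolist())      # -> ['midnight-6am', '6am-10am', '2pm-6pm']
```

The resulting categorical column can then be one-hot encoded (e.g. with pd.get_dummies) before being fed to the random forest alongside the static location features.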
First of all, I should mention that I'm not an expert in signal processing, but I know some of the very basics. So I apologize if this question doesn't make any sense.
Basically I want to be able to run a spectral analysis over a specific set of user-defined discrete frequency bands. Ideally I would want to capture around 50-100 different bands simultaneously. For example: the frequencies of each key on an 88-key grand piano.
Also I should probably mention that I plan to run this in a CUDA environment with about 200 cores at my disposal (Jetson TK1).
My question is: what acquisition time, sample rate, etc. should I use to get high enough resolution to line up with the desired bands? I don't want to choose a crazy high number like 10000 samples, so are there any tricks to minimize the number of samples while still getting spectral lines within the desired bands?
Thanks!
The FFT result does not depend on any initialization, only on the sample rate, length, and input signal. You don't need a whole FFT if you only want a few frequency results. A bandpass filter per frequency band (perhaps one per core) would let you customize each filter's bandwidth and response for that band.
Also, for music, the pitch of a note is very often different from the spectral frequency peak.
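One classic per-band alternative to a full FFT (and a natural fit for "one band per core") is the Goertzel algorithm, which evaluates a single DFT bin in O(N). A hedged sketch in plain Python, using an assumed 8 kHz sample rate and a 440 Hz test tone; it also illustrates the resolution question, since the bin spacing is fs/N (here 8000/800 = 10 Hz), so separating bands Δf apart needs roughly fs/Δf samples:

```python
import math

def goertzel_power(samples, fs, target_freq):
    """Power of `samples` at `target_freq` (Hz) via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / fs)        # nearest DFT bin to the target
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                      # one O(n) recurrence per band
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A 440 Hz tone sampled at 8 kHz should light up the 440 Hz band far more
# than a neighbouring band (523 Hz, the next piano C).
fs, n = 8000, 800
tone = [math.sin(2 * math.pi * 440 * i / fs) for i in range(n)]
print(goertzel_power(tone, fs, 440) > goertzel_power(tone, fs, 523))  # -> True
```

Running 50-100 such evaluations in parallel (one kernel per band on the Jetson) is straightforward, since each band's recurrence is independent of the others.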
I was refactoring old code and encountered several IF conditions that were way too complex and long and I'm certain they can be simplified. My guess is that those conditions grew so much because of later modifications.
Anyway, I was wondering if any of you know of a good online simplifier I can use. I'm not interested in any specific language, just a simplifier that would take in for example:
((A OR B) AND (!B AND C) OR C)
And give me a simplified version of the expression, if any.
I've looked at the other similar questions but none point me to a good simplifier.
Thanks.
You can try Wolfram Alpha as in this example based on your input:
http://www.wolframalpha.com/input/?i=((A%20OR%20B)%20AND%20(NOT%20B%20AND%20C)%20OR%20C)&t=crmtb01&f=rc
Try Logic Friday. It wraps tools from the University of California (Espresso and MIS-II) in a GUI. You can enter boolean equations and truth tables as desired, and it also features graphical gate-diagram input and output.
The minimization can be carried out two-level or multi-level. The two-level form yields a minimized sum of products; the multi-level form creates a circuit composed of logic gates, where the types of gates can be restricted by the user.
Your expression simplifies to C.
I found that The Boolean Expression Reducer is much easier to use than Logic Friday. Plus it doesn't require installation and is multi-platform (Java).
Also, in Logic Friday the expression A | B just returns 3 entries in the truth table; I expected 4.
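If a programmatic (rather than online) simplifier is acceptable, SymPy's boolean logic module can do this too; a small sketch on the question's example expression:

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

A, B, C = symbols('A B C')
expr = ((A | B) & (~B & C)) | C      # the expression from the question
print(simplify_logic(expr))         # -> C
```

This confirms the reduction to C: the left term (A | B) & (~B & C) already implies C, so OR-ing with C absorbs it.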
In the lift in my flat, the buttons aren't labelled (this being the UK): G, 1, 2, 3, etc. Nor are they in the American fashion: 1, 2, 3, 4, etc.
They're labelled 0, 1, 2, 3, i.e. they're indexed from 0.
I thought to myself: 'Clearly, if you were to write a goToFloor-like function to represent moving between floors, you could do so by the index of the element. Easy!'
And then I realised not all languages start their arrays from 0; some start from 1.
How is this decision made? Is it one of efficiency (I doubt it!)? Ease on new programmers (arguably, anyone who makes the mistake once won't make it again)?
I can't see any reason a programming language would deviate from a standard, whether it be 0, 1 or any other number. With that in mind, perhaps it would help to know the first language that had the ability to index and then the first language to break whatever convention was set?
I hope this isn't too 'wishy-washy' a question for SO, I'm very eager to hear the history behind indexing.
When the first programming languages were designed, indexing started at 0 because an array maps to memory positions: the array corresponds to a base memory position, and the number is used as an offset to retrieve the adjacent values. Seen this way, the number should be read as the distance from the start, not as the ordinal position in the array.
From a mathematical point of view it also makes sense, because it helps to implement algorithms more naturally.
However, 0 is not appealing to humans, because we start counting at 1. It's counter-intuitive, and this is why some languages decided to "fake" it and start arrays at 1. (Note that some of them, like VB, allow you to choose between 0- and 1-based arrays.)
Interesting information on this topic could be found in this famous Dijkstra article:
Why numbering should start at zero
The first "language" would have been assembler. There, an array is simply the memory address of its first element. To access one element, an offset is added: if the array is at position t0, then t0+0 is the first element, t0+1 is the second, and so on. This leads to indexes starting at 0. Later, higher-level languages added nicer syntax, but the indexes stayed the same.
Sometimes, however, exceptions were made. In Pascal, for example, a String is an array of bytes, but the first byte of the array/string stores the length of the string, so the first letter is stored at index 1. Index 0 still exists, though, and can be used to get said length.
I am working on a system that can create made-up fantasy words based on a variety of user input, such as syllable templates or a modified Backus-Naur form. One planned new mode, though, is machine learning: the user does not explicitly define any rules, but pastes some text, and the system learns the structure of the given words and creates similar words.
My current naïve approach would be to create a table of letter-neighbourhood probabilities (including a special end-of-word "letter") and fill it by scanning the input in letter pairs (using whitespace and punctuation as word boundaries). Creating a word would then mean looking up the probabilities of every letter that can follow the current letter, randomly choosing one according to those probabilities, appending it, and repeating until end-of-word is encountered.
But I am looking for more sophisticated approaches that (probably?) provide better results. I do not know much about machine learning, so pointers to topics, techniques or algorithms are appreciated.
I think that for independent words (and especially names), a simple Markov chain system (which you seem to be describing when you talk about using letter pairs) can perform really well. Feed it a lexicon and give it a seed to generate a new name based on what it learned. You may want to tweak the prefix length of the Markov chain to get nice-sounding results (as pointed out in a comment to your question, 2 letters work much better than one).
I once tried it with elvish and orcish names dictionaries and got very satisfying results.
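The letter-pair idea above can be sketched as a small order-2 Markov chain in Python; the tiny "elvish" lexicon here is an invented placeholder, and the '^'/'$' padding characters are just one common way to mark word start and end:

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each `order`-letter prefix to the letters observed after it ('$' = end)."""
    chain = defaultdict(list)
    for word in words:
        padded = '^' * order + word.lower() + '$'   # pad start, mark end
        for i in range(len(padded) - order):
            chain[padded[i:i + order]].append(padded[i + order])
    return chain

def generate(chain, order=2, max_len=12):
    """Walk the chain from the start prefix, sampling by observed frequency."""
    prefix, out = '^' * order, ''
    while len(out) < max_len:
        nxt = random.choice(chain[prefix])          # duplicates encode probability
        if nxt == '$':                              # end-of-word reached
            break
        out += nxt
        prefix = prefix[1:] + nxt                   # slide the prefix window
    return out

lexicon = ['legolas', 'galadriel', 'elrond', 'arwen', 'thranduil']  # toy sample
chain = build_chain(lexicon)
print(generate(chain))
```

Raising `order` makes the output closer to the training words (at the cost of less novelty), which matches the observation that a 2-letter prefix sounds much better than a single letter.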