First of all, I should mention that I'm not an expert in signal processing, but I know some of the very basics. So I apologize if this question doesn't make any sense.
Basically I want to be able to run a spectral analysis over a specific set of user-defined discrete frequency bands. Ideally I would want to capture around 50-100 different bands simultaneously. For example: the frequencies of each key on an 88-key grand piano.
Also I should probably mention that I plan to run this in a CUDA environment with about 200 cores at my disposal (Jetson TK1).
My question is: what acquisition time, sample rate, sampling frequency, etc. should I use to get high enough resolution to line up with the desired bands? I don't want to choose a crazy high number of samples like 10,000, so are there any tricks for minimizing the number of samples while still getting spectral lines within the desired bands?
Thanks!
The FFT result does not depend on its initialization, only on the sample rate, length, and signal input. You don't need to use a whole FFT if you only want one frequency result. A bandpass filter (perhaps 1 per core) for each frequency band would allow customizing each filter for the bandwidth and response desired for that frequency.
Also, for music, note pitch is very often different from spectral frequency peak.
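As a rough illustration of the one-computation-per-band idea (not something from the original question or answer): the Goertzel algorithm evaluates the DFT response at a single chosen frequency, so one instance could be run per piano key. The sample rate, block length, and test frequencies below are assumptions for illustration only.
import numpy as np

def goertzel_power(samples, sample_rate, target_freq):
    # Squared magnitude of the DFT bin nearest to target_freq (Goertzel recurrence).
    n = len(samples)
    k = int(round(n * target_freq / sample_rate))
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

fs = 8000                                  # assumed sample rate in Hz
t = np.arange(1024) / fs                   # 1024 samples -> about 7.8 Hz bin spacing
tone = np.sin(2 * np.pi * 440.0 * t)       # synthetic A4 test tone
print(goertzel_power(tone, fs, 440.0))     # large response: band contains the tone
print(goertzel_power(tone, fs, 523.25))    # small response: C5 is not present
Note that the frequency resolution is still sample_rate / N, so low piano keys spaced more closely than that will need a longer block (or a filter designed with a narrower bandwidth), regardless of how the per-band computation is done.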
I am trying to understand how the FFT works. To get a feel for what happens behind the scenes, I put together an example. I came across a formula for calculating the frequency from a complex FFT output; the formula is below (f = k * sample_rate / N).
We compute the frequency directly from the index of the complex number (k), the sampling rate, and the total number of items in the FFT list (N).
But in that calculation we seem to ignore the complex numbers themselves. I don't understand why we do that. Can someone give me a clue?
This video shows lots of complex-valued FFT outputs, but the presenter ignores the complex values and just finds the index (k), since the sampling rate and the length of the FFT result (N) are already known.
From these he obtains the frequency value. Is it normal to ignore the complex values entirely, or am I missing something about this calculation?
Below are my complex values, and I want to calculate the frequency by hand using the formula. How can I do that?
Thanks in advance for all comments
I tried the FFT calculation, but ignoring the complex numbers leaves me stuck.
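For what it's worth, here is a small NumPy sketch (with a made-up 50 Hz test signal) showing that the complex values are not really ignored: their magnitudes are what tell you which index k matters, and only then does f = k * sample_rate / N turn that index into a frequency.
import numpy as np

sample_rate = 1000                       # Hz, chosen just for this example
N = 1000                                 # number of samples
t = np.arange(N) / sample_rate
signal = np.sin(2 * np.pi * 50.0 * t)    # a 50 Hz test tone

spectrum = np.fft.fft(signal)            # the complex numbers
magnitudes = np.abs(spectrum[:N // 2])   # the complex values are used here, via their magnitude
k = int(np.argmax(magnitudes))           # index of the strongest bin
frequency = k * sample_rate / N          # the formula from the question

print(k, frequency)                      # -> 50 and 50.0 Hz
print(np.angle(spectrum[k]))             # the phase also lives in the complex value
So the index formula only converts a bin position into hertz; deciding which bins are interesting, and how strong and at what phase each component is, comes from the complex values themselves.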
I am working on parking-occupancy prediction using machine learning (random forest regression). I have 6 features. I have implemented a random forest model, but the results are not good. As I am very new to this, I do not know what kind of model is suitable for this kind of problem. My dataset is huge: 47 million rows. I have also used randomized search cross-validation, but I cannot improve the model. Kindly have a look at the code below and help me improve it or suggest another model.
Random forest regression
The features were extracted from the location data of the parking lots using a buffer. Kindly help me to improve this.
So, the variables you are using are:
['restaurants_pts','population','res_percent','com_percent','supermarkt_pts', 'bank_pts']
The thing I see is that, for the same parking lot, those variables won't change, so the regression will just predict the "average" occupancy of that parking lot. A key part of your problem seems to be that occupancy is not the same at 5 pm and at 4 am...
I'd suggest you work on a time variable (e.g. arrival time) so that it becomes usable.
By itself the variable cannot be understood by the model, but you can use it to create categories. For example, preprocess it to keep only the HOUR, and then bin the hours into categories (either each hour as its own category, or larger buckets like ['midnight - 6am', '6am - 10am', '10am - 2pm', '2pm - 6pm', '6pm - midnight']).
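A minimal pandas sketch of that idea, assuming a hypothetical "arrival" timestamp column (adjust the column name and bucket edges to your data):
import pandas as pd

df = pd.DataFrame({"arrival": pd.to_datetime(
    ["2023-05-01 04:15", "2023-05-01 08:30", "2023-05-01 17:45"])})

df["hour"] = df["arrival"].dt.hour                       # keep only the hour
bins = [0, 6, 10, 14, 18, 24]                            # bucket edges in hours
labels = ["midnight-6am", "6am-10am", "10am-2pm", "2pm-6pm", "6pm-midnight"]
df["time_of_day"] = pd.cut(df["hour"], bins=bins, labels=labels, right=False)

# One-hot encode the buckets so the random forest can use them as features.
time_features = pd.get_dummies(df["time_of_day"], prefix="tod")
Day of week or holiday flags can be derived the same way; any feature that actually varies over time gives the model something to separate 5 pm from 4 am with.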
I want to ask a question about the number of neurons used in the dense layers of a CNN.
From what I have seen, the number of neurons in a Dense layer is generally one of 16, 32, 64, 128, 256, 512, 1024, or 2048.
So is a descending or an ascending order better before the output layer?
For example:
model.add(Dense(2048, kernel_regularizer='l2', activation='relu'))
model.add(Dense(1024, kernel_regularizer='l2', activation='relu'))
model.add(Dense(512, kernel_regularizer='l2', activation='relu'))
model.add(Dense(128, kernel_regularizer='l2', activation='relu'))
or
model.add(Dense(128, kernel_regularizer='l2', activation='relu'))
model.add(Dense(512, kernel_regularizer='l2', activation='relu'))
model.add(Dense(1024, kernel_regularizer='l2', activation='relu'))
model.add(Dense(2048, kernel_regularizer='l2', activation='relu'))
Could you please give an answer with an explanation as well?
Thank you
TLDR:
You can use either of them, really, but it depends on many criteria.
Semi-long explanation:
You can use either of those, but they have different implications.
Basically you want the number of neurons to increase as the size of your feature map decreases, in order to retain roughly the same representational power. The same reasoning applies to developing more abstract features, which I'll talk about shortly.
This is why in a lot of papers you see networks start with a small number of neurons and gradually increase it.
The intuition behind this is that early layers deal with primitive concepts, so having a large number of neurons wouldn't really help beyond some point; but as you go deeper, the hierarchy of abstractions gets richer and richer, and you want to be able to capture as much information as you can and create new, higher, richer abstractions. This is why you increase the number of neurons as you go deeper.
On the other hand, when you reach the end of the network, you want to choose the best features out of all the features you have developed so far, so you gradually decrease the number of neurons, in the hope of ending up with the features that matter most for your specific task.
Different architectural designs have different implications and are based on different intuitions about the task at hand. You need to choose the best strategy based on your needs. A rough sketch of this grow-then-shrink pattern is given below.
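To make that concrete, here is a minimal, untuned Keras sketch of the pattern described above: convolutional filters grow as the feature map shrinks, and the dense head then narrows toward the output. The input shape and class count are placeholders, not values from the question.
from tensorflow.keras import layers, models

num_classes = 10                                     # placeholder class count

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),                 # placeholder input shape
    layers.Conv2D(32, 3, activation='relu'),         # few filters on primitive features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),         # more filters as the map shrinks
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),            # head narrows toward the output
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])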
There's no strict rule about descending vs. ascending; mostly people follow a descending order, but try to keep the number of neurons in your fully connected part greater than the number of neurons in your final classification layer.
If you look at the VGG16 architecture, its last layers are ordered 4096, 4096, 1000, where 1000 is the number of classes in the ImageNet dataset.
In your case you could follow this:
model.add(Dense(2048, kernel_regularizer='l2', activation='relu'))
model.add(Dense(1024, kernel_regularizer='l2', activation='relu'))
model.add(Dense(512, kernel_regularizer='l2', activation='relu'))
model.add(Dense(128, kernel_regularizer='l2', activation='relu'))
model.add(Dense(number_classes, activation='softmax'))
I am working on a project where I need to calculate some average values based on users' interactions on a site.
The number of records whose total average needs to be calculated can range from a few to thousands.
My question is: at what threshold would it be wise to store the aggregated data in a separate table, updated through a stored procedure every time a new record is generated, instead of just calculating it every time it is needed?
Thanks in advance.
Don't do it until you start having performance problems caused by the time it takes to aggregate your data.
Then do it.
If discovering this bottleneck in production is unacceptable, then run the system in a test environment that accurately matches your production environment and load in test data that accurately matches production data. If you hit a performance bottleneck in that environment that is caused by aggregation time, then do it.
You need to weigh the need for current data against the need for fast queries. If you absolutely need current data, then you have to live with longer-running queries. If you absolutely need your data as fast as possible, then you will have to deal with slightly stale data.
You can time your queries, time the insertion into a separate table, and evaluate which approach best fits your needs; a rough timing sketch is given below.
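One self-contained way to measure that trade-off is with an in-memory SQLite table (the table and column names here are made up; substitute your own schema and a realistic row count):
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (user_id INTEGER, value REAL)")
conn.executemany("INSERT INTO interactions VALUES (?, ?)",
                 [(i % 100, i * 0.5) for i in range(100000)])

# Precompute the aggregate into a separate summary table, as the question proposes.
conn.execute("CREATE TABLE summary (avg_value REAL)")
conn.execute("INSERT INTO summary SELECT AVG(value) FROM interactions")
conn.commit()

start = time.perf_counter()
on_the_fly = conn.execute("SELECT AVG(value) FROM interactions").fetchone()[0]
aggregate_time = time.perf_counter() - start

start = time.perf_counter()
precomputed = conn.execute("SELECT avg_value FROM summary").fetchone()[0]
lookup_time = time.perf_counter() - start

print(on_the_fly, aggregate_time)
print(precomputed, lookup_time)
If the on-the-fly aggregation stays comfortably fast at production-like volumes, the extra table and stored procedure are probably not worth the added write cost and complexity yet.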
Is there an algorithm for how to perform reservoir sampling when the points in the data stream have associated weights?
The algorithm by Pavlos Efraimidis and Paul Spirakis solves exactly this problem. The original paper with complete proofs is published with the title "Weighted random sampling with a reservoir" in Information Processing Letters 2006, but you can find a simple summary here.
The algorithm works as follows. First observe that another way to do unweighted reservoir sampling is to assign each element a random id R between 0 and 1 and incrementally (say with a heap) keep track of the top k ids. Now let's look at the weighted version, and say the i-th element has weight w_i. We modify the algorithm by choosing the id of the i-th element to be R^(1/w_i), where R is again uniformly distributed in (0,1).
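A short Python sketch of that scheme (keys R^(1/w) kept in a size-k min-heap), written from the description above rather than taken from the paper's pseudocode:
import heapq
import random

def weighted_reservoir_sample(stream, k):
    # stream yields (item, weight) pairs with weight > 0; returns k sampled items.
    heap = []                                    # min-heap of (key, item)
    for item, weight in stream:
        key = random.random() ** (1.0 / weight)  # larger weight -> key closer to 1
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

# Heavier items ("d", then "b") should appear in the sample most often.
data = [("a", 1.0), ("b", 5.0), ("c", 0.5), ("d", 10.0), ("e", 2.0)]
print(weighted_reservoir_sample(iter(data), k=2))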
Another article talking about this algorithm is this one by the Cloudera folks.
You can try the A-ES algorithm from this paper by S. Efraimidis. It's quite simple to code and very efficient.
Hope this helps,
Benoit