Using VAE-GAN architecture on larger images - deep-learning

I'm using a VAE-GAN architecture that was originally applied to low-res images (MNIST, faces) and training it on audio spectrograms, which are much higher resolution. Does anyone have recommendations for what to change in the architecture to make this work?
A few things I can think of: increasing the kernel size, or the number of layers/nodes. But the model is already quite slow to train.
Any ideas appreciated!
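To make that concrete, here is the kind of change I have in mind, as a rough Keras sketch. The input size (256x256x1), filter counts, and latent size are placeholders, not my actual model:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Rough sketch only: a VAE encoder for 256x256x1 spectrograms that uses more
# stride-2 conv blocks than a typical 28x28 MNIST encoder, so the feature map
# reaching the dense latent layers stays small even though the input is larger.
def build_encoder(input_shape=(256, 256, 1), latent_dim=128):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (32, 64, 128, 256, 512):        # 256 -> 128 -> 64 -> 32 -> 16 -> 8
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)                        # 8x8x512 here
    z_mean = layers.Dense(latent_dim)(x)
    z_log_var = layers.Dense(latent_dim)(x)
    return tf.keras.Model(inputs, [z_mean, z_log_var], name="encoder")

encoder = build_encoder()
encoder.summary()
```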

Related

What is an efficient way to get compressed ML models in multiple sizes from one model, for a non-ML expert?

I'm not an ML expert and only know a little of the background.
I know there are techniques to reduce the size of neural networks, like distillation and pruning, but I don't know how to perform those techniques efficiently.
Now I need to solve a quite practical problem. I'd like to ship FaceNet, the face recognition model, to mobile devices. There might be trade-offs between recognition accuracy, performance, and size, and I don't know which model size would best fit my needs. I think I need to test models of many sizes and figure out empirically which one is best. To do that, I would need models of many sizes obtained by compressing the pre-trained FaceNet from its website: for example, a 30 MB version of FaceNet, a 40 MB version, and so on.
However, model compression is not free and costs money, and I'm worried I'd do something very stupid and expensive. What is the recommended way to do this? Is there a common solution to this problem for a non-ML expert?
Network "compression" is not like a slider in which you can move anywhere from slow/accurate to fast/not-accurate.
Starting from your model, you can apply techniques that may or may not reduce the size of the network, and that may or may not also reduce its accuracy. For example, converting the weights from float32 to float16 will certainly halve the memory requirement of the network, but it can also introduce a small reduction in accuracy and is not supported on all devices.
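If the model is available as a TensorFlow SavedModel, that float32-to-float16 conversion can be done with the TensorFlow Lite converter. A minimal sketch; the "facenet_saved_model" path is a placeholder, and the pre-trained FaceNet weights would first have to be exported in SavedModel format:

```python
import tensorflow as tf

# Post-training float16 quantization with TensorFlow Lite (sketch).
# "facenet_saved_model" is a placeholder path for an exported SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("facenet_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]   # store weights as float16
tflite_model = converter.convert()

with open("facenet_fp16.tflite", "wb") as f:
    f.write(tflite_model)   # roughly half the size of the float32 model
```

The resulting file can be benchmarked on the target device before deciding whether heavier techniques such as pruning or distillation are worth the cost.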
My suggestion is: start with the base model and perform some tests with it. Understand how far you are from the target FPS and size of your application and then decide which approach makes more sense to reach the objective you have in mind.
Since you said you're not an ML expert: this kind of task (at least in my experience) requires some studying of the subject and some experimentation. I would start with an open-source solution such as https://tvm.apache.org/.

How to choose the hyperparameters and strategy for a neural network with a small dataset?

I'm currently doing semantic segmentation, but I have a really small dataset: only around 700 images. With data augmentation (for example, flipping) I could get it to about 2100 images. I'm not sure that's enough for my task (semantic segmentation with four classes).
I want to use batch normalization and mini-batch gradient descent. What really makes me scratch my head is that if the batch size is too small, batch normalization doesn't work well, but with a larger batch size it seems equivalent to full-batch gradient descent. I wonder if there's something like a standard ratio between the number of samples and the batch size?
Let me first address the second part of your question, the "strategy for a neural network with a small dataset". You may want to take a network pretrained on a larger dataset and fine-tune it using your smaller dataset. See, for example, this tutorial.
Second, you ask about the batch size. Indeed, a smaller batch will make the algorithm wander around the optimum, as in classical stochastic gradient descent; the sign of this is noisy fluctuation of your losses. With a larger batch size there is typically a smoother trajectory towards the optimum. In any case, I suggest you use an algorithm with momentum, such as Adam, which will aid the convergence of your training.
Heuristically, the batch size can be kept as large as your GPU memory can fit. If the amount of GPU memory is not sufficient, then the batch size is reduced.
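To illustrate the fine-tuning idea, here is a minimal Keras sketch: a frozen ImageNet-pretrained encoder with a small decoder head for four segmentation classes. The backbone choice, the 256x256 input size, and the decoder shape are placeholders rather than recommendations for your exact data:

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4                 # four segmentation classes, as in the question
INPUT_SHAPE = (256, 256, 3)     # placeholder input size

# Pretrained encoder (ImageNet weights), frozen so the small dataset only has
# to train the lightweight decoder head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=INPUT_SHAPE, include_top=False, weights="imagenet")
backbone.trainable = False

inputs = tf.keras.Input(shape=INPUT_SHAPE)
x = backbone(inputs, training=False)          # 8x8 feature map for a 256x256 input
# Very simple decoder: upsample back to the input resolution.
for filters in (256, 128, 64, 32, 16):
    x = layers.Conv2DTranspose(filters, 3, strides=2,
                               padding="same", activation="relu")(x)
outputs = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy")
```

A side note on the batch-normalization worry: with the backbone frozen and called with training=False, its batch-norm layers use their stored statistics rather than per-batch ones, so a small batch size is less of a problem for those layers.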

In deep learning, can I change the weight of loss dynamically?

Call for experts in deep learning.
Hey, I have recently been working on training images using TensorFlow in Python for tone mapping. To get a better result, I focused on using the perceptual loss introduced in this paper by Justin Johnson.
In my implementation, I made use of all three parts of the loss: a feature loss extracted from VGG16; an L2 pixel-level loss between the transferred image and the ground-truth image; and the total variation loss. I summed them up as the loss for backpropagation.
From the function
$$\hat{y} = \arg\min_{y} \; \lambda_c \,\mathrm{loss}_{\mathrm{content}}(y, y_c) + \lambda_s \,\mathrm{loss}_{\mathrm{style}}(y, y_s) + \lambda_{TV} \,\mathrm{loss}_{TV}(y)$$
in the paper, we can see that there are three weights on the losses, the λ's, to balance them. The values of the three λ's are presumably fixed throughout training.
My question is: does it make sense to dynamically change the λ's every epoch (or every few epochs) to adjust the relative importance of these losses?
For instance, the perceptual loss converges drastically in the first several epochs, yet the pixel-level L2 loss converges fairly slowly. So maybe the weight for the content loss should start higher, say 0.9, with lower weights for the others. As training progresses, the pixel-level loss becomes increasingly important for smoothing the image and minimizing artifacts, so it might be better to raise its weight a bit, much like changing the learning rate across epochs.
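To be concrete, what I mean is something like the following TensorFlow sketch, where the weights live in variables that the training loop updates between epochs. The schedule values are just illustrative, not something I have validated:

```python
import tensorflow as tf

# Sketch of "changing the lambdas per epoch": the loss weights live in
# tf.Variables so the training loop can update them without rebuilding
# the graph. The schedule below is made up.
lambda_content = tf.Variable(0.9, trainable=False, dtype=tf.float32)
lambda_pixel   = tf.Variable(0.1, trainable=False, dtype=tf.float32)
lambda_tv      = tf.Variable(1e-4, trainable=False, dtype=tf.float32)

def total_loss(content_loss, pixel_loss, tv_loss):
    return (lambda_content * content_loss
            + lambda_pixel * pixel_loss
            + lambda_tv * tv_loss)

def update_weights(epoch, num_epochs):
    # Shift importance from the perceptual (content) loss toward the
    # pixel-level L2 loss as training progresses.
    frac = epoch / max(num_epochs - 1, 1)
    lambda_content.assign(0.9 - 0.5 * frac)
    lambda_pixel.assign(0.1 + 0.5 * frac)
```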
The postdoc who supervises me strongly opposes my idea. He thinks it amounts to dynamically changing the training objective and could make the training inconsistent.
So, pros and cons, I need some ideas...
Thanks!
It's hard to answer this without knowing more about the data you're using, but in short, dynamic loss weighting should not really have that much effect and may even have the opposite effect altogether.
If you are using Keras, you could simply run a hyperparameter tuner similar to the following in order to see if there is any effect (change the loss accordingly):
https://towardsdatascience.com/hyperparameter-optimization-with-keras-b82e6364ca53
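If you go that route, a hyperparameter-tuning library such as KerasTuner can treat the loss weights as ordinary hyperparameters. This is only a sketch; the tiny one-layer model stands in for your real tone-mapping network:

```python
import tensorflow as tf
import keras_tuner as kt   # pip install keras-tuner

# Sketch: treat the loss weights as hyperparameters and let a tuner search
# over them instead of hand-scheduling. The model below is a placeholder.
def build_model(hp):
    lambda_content = hp.Float("lambda_content", 0.1, 1.0)
    lambda_pixel = hp.Float("lambda_pixel", 0.01, 1.0)

    def weighted_loss(y_true, y_pred):
        # Placeholder terms: in the real model the "content" part would be
        # the VGG16 feature loss from the paper, not a plain L1 difference.
        content = tf.reduce_mean(tf.abs(y_true - y_pred))
        pixel = tf.reduce_mean(tf.square(y_true - y_pred))
        return lambda_content * content + lambda_pixel * pixel

    inputs = tf.keras.Input(shape=(64, 64, 3))
    outputs = tf.keras.layers.Conv2D(3, 3, padding="same")(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=weighted_loss)
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=20,
                        directory="tuning", project_name="tone_mapping")
# tuner.search(train_images, target_images, validation_split=0.1, epochs=5)
```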
I've only done this on smaller models (it's way too time-consuming otherwise), but in essence it's best to keep the weights constant, and it also avoids angering your supervisor :D
If you are using a different ML or DL library, there are tuners for each; just Google them. It may be best to run these on a cluster overnight, but they usually give you a well-enough optimized version of your model.
Hope that helps and good luck!

Memory requirements for FFT on STM32F103C8

I have a limited system and would like to implement FFT within STM32F103C8 without any extra memory buffers.
So I want to know how much memory is needed if I have one image of size 2592x1944 at 8 bits per pixel.
Actually, I want to have a process such as
Original image ---> FFT ---> Blur ---> IFFT ---> Modified image
What are the memory requirements for an FFT on the STM32F103C8?
So I want to know how much memory is needed if I have one image of size 2592x1944 at 8 bits per pixel.
Much more than you have. This isn't going to work out.
2592x1944 at 8 bpp is roughly 5 MB. Your microcontroller has 20 KB of RAM, which isn't even enough to store eight lines of your image.
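For scale, a quick back-of-the-envelope calculation (a sketch; it assumes the FFT would be done in 32-bit complex floats, and fixed-point formats would at best halve the FFT buffer, which still doesn't help):

```python
# Back-of-the-envelope memory estimate (not measured on hardware).
width, height = 2592, 1944
image_bytes = width * height * 1        # 8-bit pixels
print(image_bytes)                      # 5,038,848 bytes, ~4.8 MB

# A 2-D FFT of the full image needs complex values; assuming 32-bit floats
# (4 bytes real + 4 bytes imaginary), that is 8 bytes per pixel.
fft_bytes = width * height * 8
print(fft_bytes)                        # 40,310,784 bytes, ~38 MB

sram_bytes = 20 * 1024                  # STM32F103C8 on-chip SRAM
print(sram_bytes)                       # 20,480 bytes
```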

Web Audio Pitch Detection for Tuner

So I have been making a simple HTML5 tuner using the Web Audio API. I have it all set up to respond to the correct frequencies; the problem seems to be with getting the actual frequencies. Using the input, I create an array of the spectrum, where I look for the highest value and use that frequency as the one to feed into the tuner. The problem is that when creating an analyser in Web Audio, it cannot be made more precise than an FFT size of 2048. With that, if I play a 440 Hz note, the closest bin in the array is something like 430 Hz and the next value is higher than 440. Therefore the tuner thinks I am playing those notes, when in fact the loudest frequency should be 440 Hz and not 430 Hz. Since this frequency does not exist in the analyser array, I am trying to figure out a way around this, or whether I am missing something very obvious.
I am very new at this so any help would be very appreciated.
Thanks
There are a number of approaches to implementing pitch detection. This paper provides a review of them. Their conclusion is that using FFTs may not be the best way to go - however, it's unclear quite what their FFT-based algorithm actually did.
If you're simply tuning guitar strings to fixed frequencies, much simpler approaches exist. Building a fully chromatic tuner that does not know a-priori the frequency to expect is hard.
The FFT approach you're using is entirely possible (I've built a robust musical instrument tuner using this approach that is being used white-label by a number of third parties). However, you need a significant amount of post-processing of the FFT data.
To start, you solve the resolution problem using the short-time Fourier transform (STFT), or more precisely, a succession of them. The process is described nicely in this article.
If you intend building a tuner for guitar and bass guitar (and let's face it, everyone who asks the question here is), you'll need at least a 4096-point DFT with overlapping windows in order not to violate the Nyquist rate on the bottom E1 string at ~41 Hz.
You have a bunch of other algorithmic and usability hurdles to overcome. Not least, the perceived pitch and the spectral peak aren't always the same. Taking the spectral peak from the STFT doesn't work reliably (this is also why the basic autocorrelation approach is broken).
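To illustrate just the resolution point (not the full post-processing): with a longer window and parabolic interpolation around the spectral peak, you can estimate the peak frequency far more finely than the raw bin spacing. A minimal NumPy sketch of the principle, assuming a 44.1 kHz mono signal; in the browser you would apply the same idea to the analyser's frequency data:

```python
import numpy as np

def estimate_peak_frequency(samples, sample_rate=44100, n_fft=4096):
    """Estimate the dominant frequency of one frame (sketch only)."""
    frame = samples[:n_fft] * np.hanning(n_fft)        # Hann window
    spectrum = np.abs(np.fft.rfft(frame))
    k = int(np.argmax(spectrum[1:-1])) + 1             # peak bin, skipping DC and the edge

    # Parabolic interpolation around the peak refines the estimate well below
    # the ~10.8 Hz bin spacing of a 4096-point FFT at 44.1 kHz.
    a = np.log(spectrum[k - 1] + 1e-12)
    b = np.log(spectrum[k] + 1e-12)
    c = np.log(spectrum[k + 1] + 1e-12)
    offset = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + offset) * sample_rate / n_fft

# Example: a pure 440 Hz tone comes out very close to 440 Hz.
t = np.arange(8192) / 44100.0
print(estimate_peak_frequency(np.sin(2 * np.pi * 440.0 * t)))
```

Note that this only sharpens the location of the spectral peak; the perceived-pitch issues mentioned above still need their own handling.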