What is the maximum number of polygons a 3D model created in 3ds Max can have if it is to be loaded into Sandy (Sandy3D)?

I am writing code to load a model created in 3ds Max into Sandy. Can anyone please tell me the largest polygon count Sandy can support? :)

Technically it's unlimited, but you don't want to load too many polygons, as that would seriously impact performance and make it run extremely slowly, especially in Flash.
I saw that 3ds Max 2008 had a limit of roughly 6 million polygons.
If you really need a number, the best way to find it is trial and error.

Related

What is an efficient way to get compressed ML models in multiple sizes from one model, for a non-ML expert?

I'm not an ML expert and know just a little background about it.
I know that there are techniques to reduce the size of neural networks, like distillation and pruning, but I don't know how to perform them efficiently.
Now I need to solve a quite practical problem. I'd like to ship FaceNet, the face recognition model, to mobile devices. There might be trade-offs between recognition accuracy and performance + size, and I don't know which model size would best fit my needs. I think I need to test models of several sizes and figure out which one is best empirically. To do this, I would need to obtain models of several sizes by compressing the pre-trained FaceNet from its website, for example a 30 MB version of FaceNet, a 40 MB version, and so on.
However, model compression is not free and costs money. I'm worried that I'd do something very stupid and expensive. What is the recommended way to do this? Is there a common solution to this problem for a non-ML expert?
Network "compression" is not like a slider in which you can move anywhere from slow/accurate to fast/not-accurate.
From your starting model you can apply some techniques which may or may not reduce the size of the network and may or may not reduce also the accuracy. For example converting weights from float32 to float16 will for sure reduce the memory requirement of the network by half, but can also introduce a small reduction in accuracy and it's not supported on all devices.
My suggestion is: start with the base model and perform some tests with it. Understand how far you are from the target FPS and size of your application and then decide which approach makes more sense to reach the objective you have in mind.
Since you said you're not an ML expert I think this kind of task (at least to my knowledge) requires some amount of studying of the subject and experimentation and I would start with an open source solution like for example https://tvm.apache.org/.
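For illustration, here is a minimal sketch of post-training float16 quantization with TensorFlow Lite. The SavedModel path and output file name are hypothetical, and this assumes you have FaceNet exported as a TensorFlow SavedModel, which may not match the format you actually have:

```python
import tensorflow as tf

# Hypothetical path to a FaceNet model exported as a TensorFlow SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("facenet_saved_model")

# Post-training float16 quantization: weights are stored as float16,
# roughly halving the file size, usually with only a small accuracy drop.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

tflite_model = converter.convert()
with open("facenet_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```

Measuring the size and accuracy of an artifact like this against the original model is one cheap data point before paying for heavier compression work.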

In deep learning, can I change the weights of the loss dynamically?

Calling for experts in deep learning.
Hey, I am currently working on training images using TensorFlow in Python for tone mapping. To get better results, I focused on using the perceptual loss introduced in this paper by Justin Johnson.
In my implementation, I used all three parts of the loss: a feature loss extracted from VGG16, an L2 pixel-level loss between the transferred image and the ground-truth image, and the total variation loss. I summed them up as the loss for backpropagation.
From the function
$\hat{y} = \arg\min_y \; \lambda_c \, \text{loss}_{\text{content}}(y, y_c) + \lambda_s \, \text{loss}_{\text{style}}(y, y_s) + \lambda_{TV} \, \text{loss}_{TV}(y)$
in the paper, we can see that there are three weights on the losses, the λ's, to balance them. The values of the three λ's appear to be fixed throughout training.
My question is: does it make sense to dynamically change the λ's every epoch (or every few epochs) to adjust the relative importance of these losses?
For instance, the perceptual loss converges drastically in the first several epochs, yet the pixel-level L2 loss converges fairly slowly. So maybe the weight should initially be higher for the content loss, say 0.9, and lower for the others. As training progresses, the pixel-level loss becomes increasingly important for smoothing the image and minimizing artifacts, so it might be better to raise its weight a bit, much like changing the learning rate across epochs.
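For what it's worth, here is a minimal sketch of the kind of thing I mean, assuming Keras with a combined custom loss. The loss terms, weight values, and schedule are placeholders, not my actual training code:

```python
import tensorflow as tf
from tensorflow import keras

# Loss weights held in non-trainable variables so a callback can update them.
lambda_content = tf.Variable(0.9, trainable=False, dtype=tf.float32)
lambda_pixel = tf.Variable(0.05, trainable=False, dtype=tf.float32)
lambda_tv = tf.Variable(0.05, trainable=False, dtype=tf.float32)

def combined_loss(y_true, y_pred):
    # Placeholder terms: in the real code the content term would be the
    # VGG16 feature loss rather than a plain MSE.
    content = tf.reduce_mean(tf.square(y_true - y_pred))
    pixel = tf.reduce_mean(tf.square(y_true - y_pred))
    tv = tf.reduce_mean(tf.image.total_variation(y_pred))
    return lambda_content * content + lambda_pixel * pixel + lambda_tv * tv

class LossWeightScheduler(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # Example schedule: shift emphasis toward the pixel loss later on.
        if epoch >= 10:
            lambda_content.assign(0.5)
            lambda_pixel.assign(0.4)

# model.compile(optimizer="adam", loss=combined_loss)
# model.fit(x_train, y_train, epochs=30, callbacks=[LossWeightScheduler()])
```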
The postdoc supervising me strongly opposes my idea. He thinks it amounts to dynamically changing the training model and could cause inconsistency in the training.
So, pros and cons; I need some ideas...
Thanks!
It's hard to answer this without knowing more about the data you're using, but in short, a dynamic loss should not really have that much effect and may even have the opposite effect.
If you are using Keras, you could simply run a hyperparameter tuner similar to the following in order to see if there is any effect (changing the loss accordingly):
https://towardsdatascience.com/hyperparameter-optimization-with-keras-b82e6364ca53
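As a rough illustration only (the toy model, losses, and search ranges below are placeholders, not your tone-mapping network), a Keras Tuner search over the loss weights might look something like this:

```python
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    # Toy two-output model standing in for the content / pixel branches.
    inp = keras.Input(shape=(64, 64, 3))
    x = keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inp)
    content_out = keras.layers.Conv2D(3, 3, padding="same", name="content")(x)
    pixel_out = keras.layers.Conv2D(3, 3, padding="same", name="pixel")(x)
    model = keras.Model(inp, [content_out, pixel_out])
    model.compile(
        optimizer="adam",
        loss=["mse", "mse"],
        # Let the tuner pick the relative weight of each loss term.
        loss_weights=[hp.Float("lambda_content", 0.1, 1.0),
                      hp.Float("lambda_pixel", 0.1, 1.0)],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=8)
# tuner.search(x_train, [y_train, y_train], validation_split=0.2, epochs=5)
```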
I've only done this on smaller models (it's way too time-consuming otherwise), but in essence it's best to keep the weights constant, which also avoids angering your supervisor :D
If you are using a different ML or DL library, there are tuners for each; just Google them. It may be best to run these on a cluster overnight, but they usually give you a well enough optimized version of your model.
Hope that helps and good luck!

Graphhopper - Travel Times Between All 30,000 Visible Zip Codes?

I'd like to calculate the matrix of travel times between US zip codes. There are about 30,000 visible zip codes, so this is 900 million calculations (or 450 million, assuming travel time is the same in both directions).
I haven't used GraphHopper before, but it seems suited to the task. My questions are:
What's the best way of doing it?
Will this overload the graphhopper servers?
How long will it take?
I can supply latitude and longitude for each pair of zip codes.
Thanks - Steve
I've not tested GraphHopper yet with such a large number of points, but it should be possible.
What's the best way of doing it?
It would probably be faster if you avoid the HTTP overhead and use the Java library directly, as in this example. Be sure to assign enough RAM, as the matrix itself is already about 2 GB even if you only use a short value for the distance or time. See also this question.
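A quick back-of-the-envelope check of that memory figure (assuming 30,000 points and 2-byte short entries; the numbers are only illustrative):

```python
# Rough size of a full 30,000 x 30,000 travel-time matrix of 2-byte shorts.
n = 30_000
bytes_per_entry = 2  # a Java short

full_matrix = n * n * bytes_per_entry               # both directions stored
half_matrix = n * (n - 1) // 2 * bytes_per_entry    # symmetric travel times

print(f"full matrix : {full_matrix / 1024**3:.2f} GiB")   # ~1.68 GiB
print(f"half matrix : {half_matrix / 1024**3:.2f} GiB")   # ~0.84 GiB
```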
Will this overload the graphhopper servers?
The API is not allowed to be used without an API key which you can grab here. Or set up your own GraphHopper server.
How long will it take?
It will probably take some days, though.
Warning - enterprisey note: we provide support for setting up your own servers or for your use case. We also sell a matrix add-on which makes those calculations at least 10 times faster.

Web Audio Pitch Detection for Tuner

So I have been making a simple HTML5 tuner using the Web Audio API. I have it all set up to respond to the correct frequencies; the problem seems to be with getting the actual frequencies. Using the input, I create an array of the spectrum, where I look for the highest value and use that frequency as the one to feed into the tuner. The problem is that when creating an analyser in Web Audio, it cannot be made more fine-grained than an FFT size of 2048. With that, if I play a 440 Hz note, the closest bin in the array is something like 430 Hz and the next value is higher than 440 Hz. Therefore the tuner thinks I am playing those notes when in fact the loudest frequency should be 440 Hz, not 430 Hz. Since this frequency does not exist in the analyser array, I am trying to figure out a way around this, or whether I am missing something very obvious.
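To make the problem concrete, here is the arithmetic as I understand it (assuming a 44.1 kHz sample rate, which I haven't stated above, so treat it as an assumption):

```python
# Frequency resolution (bin spacing) of an FFT is sample_rate / fft_size.
sample_rate = 44_100   # Hz, assumed
fft_size = 2_048

bin_width = sample_rate / fft_size                    # ~21.5 Hz per bin
lower_bin = int(440 / bin_width) * bin_width          # ~430.7 Hz
upper_bin = (int(440 / bin_width) + 1) * bin_width    # ~452.2 Hz
print(f"bin width: {bin_width:.1f} Hz")
print(f"bins around 440 Hz: {lower_bin:.1f} Hz and {upper_bin:.1f} Hz")
```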
I am very new at this so any help would be very appreciated.
Thanks
There are a number of approaches to implementing pitch detection. This paper provides a review of them. Their conclusion is that using FFTs may not be the best way to go - however, it's unclear quite what their FFT-based algorithm actually did.
If you're simply tuning guitar strings to fixed frequencies, much simpler approaches exist. Building a fully chromatic tuner that does not know a-priori the frequency to expect is hard.
The FFT approach you're using is entirely possible (I've built a robust musical instrument tuner using this approach that is being used white-label by a number of 3rd parties). However you need a significant amount of post-processing of the FFT data.
To start, you solve the resolution problem using the Short-Time FFT (STFT) - or more precisely - a succession of them. The process is described nicely in this article.
If you intend building a tuner for guitar and bass guitar (and let's face it, everyone who asks the question here is), you'll need at least a 4096-point DFT with overlapping windows in order to get enough frequency resolution for the bottom E1 string at ~41 Hz.
You have a bunch of other algorithmic and usability hurdles to overcome. Not least, perceived pitch and the spectral peak aren't always the same. Taking the spectral peak from the STFT doesn't work reliably (which is also why the basic auto-correlation approach is broken).
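A very rough sketch of that STFT starting point, in Python/NumPy rather than Web Audio so the structure is easy to see; the window and hop sizes are illustrative, and this is only the raw peak-picking stage before any of the post-processing mentioned above:

```python
import numpy as np

def stft_peak_frequencies(samples, sample_rate=44100, fft_size=4096, hop=1024):
    """Overlapping Hann-windowed FFT frames, returning the peak-bin frequency
    of each frame. A real tuner needs more post-processing on top of this
    (harmonic handling, interpolation, smoothing)."""
    window = np.hanning(fft_size)
    peaks = []
    for start in range(0, len(samples) - fft_size, hop):
        frame = samples[start:start + fft_size] * window
        spectrum = np.abs(np.fft.rfft(frame))
        peak_bin = int(np.argmax(spectrum))
        peaks.append(peak_bin * sample_rate / fft_size)
    return peaks
```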

What's a best practice sampling rate for Perfmon?

What's an acceptable sampling rate with Perfmon? Obviously, the more often we sample, the more our performance sampling has an effect on the performance on the machine. I'm hoping someone out there has a good rule of thumb for such a thing.
Evidence and statistics would be even better, but I'd be happy with generally accepted best practices.
The sample rate depends, of course, on how long you sample and for how long you want to keep the data to work with it later.
Brent Ozar has a pretty nice video about configuring Perfmon best practices and using it to monitor SQL and hardware performance. He recommends a 15-second sample rate for long-term statistics.
If you run Perfmon daily for 6 hours at that rate, it will generate roughly 1,440 samples, which is fine for me.
Brent Ozar Perfmon
If you are chasing a critical issue which is happening right now, you will of course lower the interval. But for general data collection, a rate of 15 seconds (4 samples per minute) works well.
It always depends on what you monitor. VMware's suggestions for monitoring virtualisation are the following:
problem occurs hourly : sample rate 5 seconds
problem occurs daily : sample rate 90 seconds
problem occurs weekly : sample rate 15 minutes
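Putting those intervals into raw sample counts (a quick back-of-the-envelope, using the 6-hour window from above and hypothetical 1-hour, 24-hour, and 1-week windows for the other rates):

```python
# Number of Perfmon samples produced at a given interval over a given window.
def sample_count(interval_seconds, duration_hours):
    return duration_hours * 3600 // interval_seconds

print(sample_count(15, 6))            # 1440 samples: 15 s rate over a 6 h run
print(sample_count(5, 1))             # 720 samples: 5 s rate while chasing an hourly issue
print(sample_count(90, 24))           # 960 samples: 90 s rate over a full day
print(sample_count(15 * 60, 24 * 7))  # 672 samples: 15 min rate over a week
```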