What data is needed to train an object detection model to detect objects in video? - deep-learning

I have a task to detect illegal parking. I want to use object detection models like YOLO, SSD and others to detect it in video. Can I train them on the video itself, on all frames from the video, or only on frames from the video that are not similar to each other?

Related

MediaCodec decoder issues on the MTK platform?

I'm working on an Android video player project, which requires decoding frames in real time. I use the MediaCodec framework for decoding. I tested my app on some devices and found that the MediaCodec decoder on the MTK platform holds several frames back in its own buffers, which makes the frame decode latency much higher and means I cannot meet the real-time requirement. For example, dequeueOutputBuffer only returns a frame after I have fed in four frames through dequeueInputBuffer. Are there any solutions to reduce the decode latency?

Object Detection from Image Classification

Can I use a model trained for image classification to do object detection? I have already spent a lot of time collecting the images and sorting each class into its own folder.
You can use your classification model as an initialized backbone for a detection model (e.g. Faster R-CNN), but it might not help that much compared to training your detector from scratch.
You will need to add detection layers (e.g. ROI pooling) on top of your backbone to perform detection.
While you can try unsupervised object detection, you will usually need extra labels, such as object bounding boxes, to train your object detector.

Play a sound from a ByteArray in AS3

I am recording sound using the Microphone class. After the recording is complete, I have a ByteArray.
Now I want to play the sound back from that ByteArray. Is this possible?
Thanks.
See this manual
In short, have the microphone capture 44.1 kHz samples (mono), duplicate each sample in your own sample-data handler, and feed them to your Sound object. 44.1 kHz is mandatory if you don't want your sound pitched, because dynamically generated sound is always played back at 44.1 kHz. The duplication is what turns the mono signal into stereo, as you can't play mono sample data in Flash.
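For illustration, a minimal sketch of that approach, assuming the mono 44.1 kHz float samples captured from the Microphone are already stored in a ByteArray (the name recordedBytes is hypothetical):

    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.utils.ByteArray;

    // recordedBytes is assumed to have been filled while recording, from the
    // Microphone's SampleDataEvent.data, with microphone.rate set to 44 (44.1 kHz mono).
    var recordedBytes:ByteArray = new ByteArray();

    var playback:Sound = new Sound();
    playback.addEventListener(SampleDataEvent.SAMPLE_DATA, onPlaybackSampleData);

    function onPlaybackSampleData(event:SampleDataEvent):void {
        // Provide up to 8192 stereo sample frames per callback; providing none ends playback.
        for (var i:int = 0; i < 8192 && recordedBytes.bytesAvailable >= 4; i++) {
            var sample:Number = recordedBytes.readFloat();  // one mono sample
            event.data.writeFloat(sample);                  // left channel
            event.data.writeFloat(sample);                  // right channel: duplicating makes it stereo
        }
    }

    recordedBytes.position = 0;
    playback.play();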

Obtain the resulting ByteArray of the currently playing sounds

I am developing a desktop AIR application that simulates a drum set. Pressing a key plays the corresponding drum sound in the application. I have placed music notes in the application so the user can try to play a particular song.
Now I want to record the whole performance and export it to a video file, say FLV. I have already succeeded in recording the video using this encoder:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
However, this encoder does not record sound automatically. I need a way to get the sound as a ByteArray for that particular frame and pass it to the encoder. Each frame may have different Sound objects playing at the same time, so I need the bytes of the final mixed sound.
I am aware that SoundMixer.computeSpectrum() can return the current sound as bytes. However, the ByteArray it returns has a fixed length of 512, which does not meet the encoder's requirement. After a bit of testing with a sample rate of 44 kHz, 8-bit stereo, the encoder expects the audio byte array for each frame to have a length of 5880. The data returned by SoundMixer.computeSpectrum() is far shorter than what the encoder requires.
My application runs at 60 FPS and records at 15 FPS.
So my question is: is there any way to obtain the audio bytes for the current frame, mixed from more than one Sound object, with enough data for the encoder to work? If there is no API for that, I will have to mix the audio and produce the resulting bytes myself; how can that be done?
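One possible way to do the mixing yourself (a sketch, not an answer taken from this page): Sound.extract() returns the raw 44.1 kHz, 32-bit float, stereo samples of a fully loaded Sound, so you could pull 44100 / 15 = 2940 sample frames from every currently playing drum sound, sum them, and convert the result into the 8-bit, 5880-byte buffer the encoder expects. The activeSounds collection below is hypothetical, and the signed 8-bit conversion at the end is an assumption about the encoder's input format:

    import flash.media.Sound;
    import flash.utils.ByteArray;

    function mixFrameAudio(activeSounds:Vector.<Sound>):ByteArray {
        const FRAMES:int = 2940;  // 44100 samples per second / 15 recorded frames per second
        var mix:Vector.<Number> = new Vector.<Number>(FRAMES * 2, true);  // stereo accumulator
        for (var k:int = 0; k < mix.length; k++) mix[k] = 0;

        var temp:ByteArray = new ByteArray();
        for each (var snd:Sound in activeSounds) {
            temp.clear();
            // startPosition -1 continues from wherever the previous extract() on this Sound stopped.
            snd.extract(temp, FRAMES, -1);
            temp.position = 0;
            for (var i:int = 0; temp.bytesAvailable >= 4 && i < mix.length; i++) {
                mix[i] += temp.readFloat();  // sum every sound's samples
            }
        }

        // Convert the float mix into 5880 bytes of 8-bit audio for the encoder.
        var out:ByteArray = new ByteArray();
        for (var j:int = 0; j < mix.length; j++) {
            var s:Number = Math.max(-1, Math.min(1, mix[j]));  // clip to avoid overflow
            out.writeByte(int(s * 127));                       // assumed signed 8-bit PCM
        }
        return out;
    }

Whether this lines up exactly with what the FLV encoder expects (signed vs. unsigned 8-bit, channel order) is something to verify against the encoder's source.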

Adobe AIR - Garbage collection and system.gc()

I'm building an Adobe AIR desktop app with Flash CS5 that makes heavy use of BitmapData, ByteArrays and Base64 strings. After a while, the memory usage of the app doubles.
Is it recommended to use system.gc() to free memory at that point or is that bad practice?
Thanks.
System.gc() only works in the debugger version of Flash Player and in AIR applications. I think the better approach is to recycle BitmapData and other objects if you can, to avoid garbage collection altogether, and otherwise to call bitmapData.dispose() and bitmapData = null as soon as you are done using them.
If you have bitmap objects of the same size at various times in your project, you can reuse the same BitmapData instance to operate on them. This is similar to how ItemRenderers recycle items, or how other platforms such as iOS's UITableViewController reuse UITableViewCell instances. Garbage collection is not a panacea; it should be relied on when ease of programming matters more than performance.
You don't need to call System.gc(), as garbage collection runs automatically on idle cycles in the Flash runtime. If you call it yourself, you might end up slowing down your application for no real gain.
When you don't need a BitmapData or a ByteArray anymore, just call BitmapData.dispose() or ByteArray.clear().
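To illustrate both answers, a small combined sketch (all names below are mine): reuse a single BitmapData for same-sized snapshots instead of allocating one per frame, and release buffers explicitly once they are no longer needed:

    import flash.display.BitmapData;
    import flash.display.DisplayObject;
    import flash.utils.ByteArray;

    // One reusable buffer for every same-sized snapshot, instead of a new BitmapData per frame.
    var scratch:BitmapData = new BitmapData(640, 480, true, 0x00000000);
    var pixels:ByteArray = new ByteArray();  // some other buffer used by the app (hypothetical)

    function snapshot(source:DisplayObject):void {
        scratch.fillRect(scratch.rect, 0x00000000);  // wipe the previous frame
        scratch.draw(source);                        // draw into the same, reused instance
    }

    function cleanUp():void {
        scratch.dispose();  // frees the pixel memory immediately
        scratch = null;
        pixels.clear();     // releases the ByteArray's internal buffer the same way
        pixels = null;
    }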