I would like to implement YOLO from scratch. I have seen code available on GitHub, but I want to try it myself. Is it possible to implement YOLO as an ordinary Python script without using Darkflow? I am planning to implement it in Keras.
All kinds of neural networks can be implemented in Python from scratch; if you really want to do it, you can. The numpy and scipy libraries make the vector and matrix calculations easy.
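For instance, intersection-over-union, a calculation you will use constantly in YOLO to score how well a predicted box matches a ground-truth box, is only a few lines of numpy. This is a minimal sketch; the corner-point (x1, y1, x2, y2) box format here is my own assumption (Darknet itself stores boxes as centre/width/height):

```python
import numpy as np

def iou(boxes_a, boxes_b):
    """Element-wise IoU for two (N, 4) arrays of boxes as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle for each pair of boxes.
    x1 = np.maximum(boxes_a[:, 0], boxes_b[:, 0])
    y1 = np.maximum(boxes_a[:, 1], boxes_b[:, 1])
    x2 = np.minimum(boxes_a[:, 2], boxes_b[:, 2])
    y2 = np.minimum(boxes_a[:, 3], boxes_b[:, 3])
    # Clamp to zero so disjoint boxes get zero overlap, not negative area.
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a + area_b - inter)

a = np.array([[0.0, 0.0, 10.0, 10.0]])
b = np.array([[5.0, 5.0, 15.0, 15.0]])
print(iou(a, b))  # [0.14285714] -> 25 overlap / 175 union
```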
What you are going to do is a time-consuming task, and it will not be easy, but if you try hard you can do it. And don't forget to share the code with us.
First, you will need a basic understanding of the YOLO network, so I would suggest reading the research papers. The original YOLO paper and the second paper discuss many details of the network and how it works; that understanding will be helpful when debugging your own implementation.
The third paper is easier to read than the other two, but it only explains the modifications they made, so to get a full understanding of the network you still have to read all three.
Original YOLO paper
YOLO9000 (YOLO version 2)
YOLOv3
After you have downloaded YOLO, you will find a file called yolo.cfg, which you can open in any text editor.
At the top of the file they define some hyperparameters; the papers explain the meaning of those parameters.
After that, they describe the YOLO network layer by layer, much as Caffe does in its prototxt files. The format is not exactly the same as a prototxt file, but you get the idea, and it is very helpful when building your own network.
They have also written the network so that it changes considerably when switching from training mode to testing mode. You can find all of that in the research papers too; keep it in mind.
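As a starting point, here is a minimal sketch of reading a Darknet-style cfg file into a list of blocks that you can later translate into Keras layers. It assumes only the usual [section] header / key=value layout described above:

```python
def parse_cfg(path):
    """Parse a Darknet-style .cfg file into a list of {option: value} dicts.

    Each dict gets a 'type' key from its [section] header,
    e.g. 'net', 'convolutional', 'yolo'.
    """
    blocks = []
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith('#'):
                continue  # skip blank lines and comments
            if line.startswith('['):
                blocks.append({'type': line.strip('[]')})
            else:
                key, _, value = line.partition('=')
                blocks[-1][key.strip()] = value.strip()
    return blocks

blocks = parse_cfg('yolo.cfg')
print(blocks[0])             # the [net] block with the hyperparameters
print(len(blocks) - 1, 'layer blocks')
```

Looping over the remaining blocks and emitting one Keras layer (or group of layers) per block is a reasonable way to structure your model-building code.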
Happy Coding !!!
I need to build a deep learning model for image classification, train it on my data, and then deploy it on real machines.
In summary, my main problems are:
The images are very large, which leads to CUDA out-of-memory errors. What should I do to keep my model within the memory limit?
I also need very fast inference, because the model will be used in a real deployment environment where timely responses matter.
I need to solve both problems to deploy my model.
I think it is important to reduce the size of the images; resize them if necessary, as that can significantly reduce the memory cost.
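If you are using Keras, for example, you can resize images as they are loaded, so the full-resolution files never reach the GPU. A minimal sketch, where the directory layout and the 224x224 target size are placeholder assumptions:

```python
import tensorflow as tf

# Assumes "data/train" holds one subfolder per class; both the path and
# the target size are placeholders for your own setup.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=(224, 224),  # downscale on load, before batching
    batch_size=32,
)
```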
Batch size is directly related to the training and inference speed of a deep network, so you can also try different batch sizes. But I think a better GPU card is even more important for image classification with a deep network.
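As for batch size, a rough sketch of probing for a workable value: start high and halve until a training step no longer exhausts GPU memory. The tiny model and random data here are stand-ins for your own network and dataset:

```python
import numpy as np
import tensorflow as tf

# Stand-in data and model; substitute your own.
x = np.random.rand(256, 224, 224, 3).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

for batch_size in (256, 128, 64, 32, 16):
    try:
        model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0)
        print(f"batch size {batch_size} fits in memory")
        break
    except tf.errors.ResourceExhaustedError:
        print(f"batch size {batch_size} ran out of memory, halving")
```

Larger batches generally improve throughput, so the largest size that fits is usually a good starting point.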
I think you need a better GPU card, as deep learning is resource-hungry.
I am a software developer. Lately I have been thinking of trying out firmware development, as the company I work for is trying to enter that domain.
I have many questions regarding firmware development, such as:
What tools are used, such as IDEs?
In which language is most of the code written?
How do you port code onto a microcontroller?
How do you code for different microcontrollers?
How do you determine what you need for a specific application (choosing the microcontroller, etc.)?
Anything else I should know about, and where do I start? Sorry if this question is too basic, but I could not find any satisfactory answers elsewhere.
Most microcontrollers have decent C compilers, so they are best coded for in C, although you might need to delve into assembly for the occasional high-performance routine. The choice of microcontroller is usually determined by the hardware demands, on-board peripherals, performance, and cost constraints.
You wouldn't generally be porting code from a Windows/Linux/Mac environment to a microcontroller; you would generally be writing directly for the microcontroller, so strictly speaking the compiler is a cross compiler, compiling on your PC to run on a different processor. You typically get debuggers, emulators, and full editor capabilities in the IDE, so it's a similar experience to writing code in a PC environment, except that the code runs more slowly and has to be downloaded to the target hardware, or emulated, to be tested.
A great authority to start with on embedded development is Jack Ganssle and his Firmware Handbook. Also see www.embedded.com for general articles.
The crawler needs to have an extensible architecture that allows changing the internal process, such as implementing new steps (pre-parser, parser, etc.).
I found the Heritrix Project (http://crawler.archive.org/).
But are there other nice projects like that?
Nutch is the best you can do when it comes to a free crawler. It is built on the concepts behind Lucene (in an enterprise-scale manner) and is backed by Hadoop, using MapReduce (much like Google) for large-scale data querying. Great products! I am currently reading all about Hadoop in the new (not yet released) Hadoop in Action from Manning. If you go this route, I suggest joining their technical review team to get an early copy of the title!
These are all Java-based. If you are a .NET person (like me!), you might be more interested in Lucene.NET, Nutch.NET, and Hadoop.NET, which are all class-by-class, API-by-API ports to C#.
You may also want to try Scrapy (http://scrapy.org/).
It makes it really easy to specify and run your crawlers.
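For instance, a complete spider that records each page and follows its links fits in a dozen lines (the start URL is a placeholder):

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]  # placeholder start page

    def parse(self, response):
        # Record the page, then follow every link found on it.
        yield {"url": response.url, "title": response.css("title::text").get()}
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```

Save it as example_spider.py and run it with scrapy runspider example_spider.py -o pages.json.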
Abot is a good extensible web crawler. Every part of the architecture is pluggable, giving you complete control over its behavior. It's open source, free for commercial and personal use, and written in C#.
https://github.com/sjdirect/abot
I recently discovered one called Nutch.
If you're not tied down to a platform, I've had very good experiences with Nutch in the past.
It's written in Java and goes hand in hand with the Lucene indexer.
Can I make a difference on an open source project?
I don't have a degree or anything, but I am really interested in computer science and I have most of the fundamentals down.
Is there a project where I can make a difference? If not, are there any sites where I can further my knowledge and review the fundamentals (and advanced concepts) of computer programming?
Scour GitHub for projects; there are plenty that could use some help.
At the very least, write tests for untested code and submit them back. Even the smallest contributions are appreciated.
Newcomers to an active open source project often feel like they are walking into a busy kitchen: a lot of different things are going on, and you feel like you are just in the way.
But often that's not the case.
I can't point you to a specific project, since I don't know your skill set or what you want to focus on.
Getting into an open source project can take time. It depends mostly on the size of the project, but usually it comes down to seeing what is needed.
What I recommend is what most people do: find a project that inspires you to make it better (even if it's good to begin with), since that will make you want to stick around through the harder times.
Absolutely. Writing documentation and unit tests is good advice, but I'd suggest instead that you find something you're particularly interested in, perhaps a piece of open source software you already use, and add a feature that you yourself want to use. It'll be more difficult, but it'll actually keep your interest and give you real-world experience. Worst case, your patch won't be accepted; but if it's a decent project, they'll tell you why and what you need to do to make it acceptable.
Or pick a small problem you want to see solved and write an open source solution for it. The key is to actually be interested in the problem you're solving.
Open source software is not magically high-quality code; in fact, it's not unusual to find sloppy code and practices. Don't be intimidated; jump in and give it a try. My first piece of open source still has a few users over ten years later, but the code quality makes me cringe every time I look at it.
You can visit Sourceforge.net and look for projects that need help.
How do you estimate an EAI project using function points?
FP analysis is inappropriate for integration projects of any sort, as it presupposes that you can specify the application up front. Most of the work in any integration project of non-trivial complexity is reverse-engineering the nuances of the environment, and the environment will typically not be exhaustively documented in the sort of situations where you would expect to use an EAI system.
By the time you have done enough reverse engineering to have a complete specification, you have done most of the work in the project; the actual development is fairly short and sweet by comparison. The function point analysis is therefore only providing an estimate for a small part of the system.
As an aside, much of the work I do is data warehouse systems for commercial insurance companies, where extensive prototyping and reconciliation exercises to produce detailed specification documents are actually quite appropriate to the environment. This typically takes longer than developing the production system, as most of the data issues are resolved in the prototyping work. EAI systems have a similar class of implementation issues.
Well, given that FP counting is based on storage and end-user interfaces, I'm not sure it's even meaningful for EAI (from what little I remember).
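For reference, an unadjusted FP count is just a weighted sum over inputs, outputs, inquiries, and files, which shows why it says so little about message routing and transformation logic. A sketch using the standard IFPUG average-complexity weights; the counts below are invented for illustration:

```python
# IFPUG average-complexity weights per function type.
weights = {
    "external inputs": 4,
    "external outputs": 5,
    "external inquiries": 4,
    "internal logical files": 10,
    "external interface files": 7,
}
# Invented counts for a hypothetical application.
counts = {
    "external inputs": 12,
    "external outputs": 8,
    "external inquiries": 5,
    "internal logical files": 6,
    "external interface files": 3,
}
ufp = sum(weights[k] * counts[k] for k in weights)
print(ufp)  # 48 + 40 + 20 + 60 + 21 = 189 unadjusted function points
```

None of those categories captures the reverse-engineering effort that dominates an integration project, which is the point made above.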
I would say you can't, at least not in a useful way. FP counting is generally viewed as a dubious practice of varying accuracy; applying it to an integration project would just add more fuzziness.