CUDA in BulletPhysics / BulletSharp?

Could BulletSharp (or BulletPhysics itself, if you don't know about BulletSharp) use CUDA? If so, where could I find the appropriate settings (a switch to turn CUDA on or off, etc.)?
I found old information that there were experiments with using CUDA in Bullet, but I can't find anything about the current state (and I found no mention of it in the BulletSharp or BulletPhysics code).
Note: BulletSharp is a C# wrapper for BulletPhysics.
Thank you for any information.

I think Andres Traks also answered this question in his answer to this one.
Namely:
Work on the GPU pipeline in Bullet 3 seems to have stopped, so there are currently no plans to support version 3 in BulletSharp.

Related

AVR-Studio how to output?

I do not have experience with microcontrollers, but I have something related to them. Here is an explanation of my issue:
I have an algorithm, and I want to calculate how many cycles it would cost on a specific AVR microcontroller.
To do that I downloaded AVR Studio 6 and used the simulator. I succeeded in obtaining the number of cycles for my algorithm. What I want to know is how I can make sure that my algorithm is working as it should. AVR Studio allows me to debug using the simulator, but I am not able to see the output of my algorithm.
To simplify my question: I would like some help implementing the hello world example in AVR Studio, that is, I want to see "hello world" in the output window, if that is possible.
My question is not how to program the microcontroller; my question is how I can see the output of a program in AVR Studio.
Many thanks
As Hanno Binder suggested in his comment:
Atmel Studio still does not provide any means to display debug messages sent by the simulated program. Your only option is to place breakpoints at appropriate locations and then inspect the state of the device in the simulator: for example, the locations in RAM where your result is stored, or the registers in which it may reside; maybe set a 'watch' on a variable or expression.
I think this is the best answer: watch variables and memory while in debug mode.
Note: turn off optimization when you want to debug, or some variables will be optimized away.
The best way to test whether an algorithm works is to run it in a regular PC program, feed it data, and compare the results with ground truth.
Clearly, to be able to do this, a good programming style is necessary, one that separates hardware-related tasks from the actual data processing. You also have to keep architectural differences in mind (e.g. int is 16-bit on AVR vs. 32-bit on a PC, so use the fixed-width types from inttypes.h). A minimal harness in that style is sketched below.
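Something like this; the checksum() function and the expected value are just stand-ins for your own algorithm and ground-truth data:

#include <stdint.h>
#include <stdio.h>

/* The algorithm under test, written against fixed-width types so it
   behaves the same on the AVR (16-bit int) and on the PC (32-bit int). */
uint16_t checksum(const uint8_t *data, uint16_t len) {
    uint16_t sum = 0;
    for (uint16_t i = 0; i < len; ++i)
        sum += data[i];
    return sum;
}

int main(void) {
    const uint8_t input[] = {1, 2, 3, 4, 5};
    const uint16_t expected = 15;  /* ground truth, computed by hand */
    uint16_t actual = checksum(input, sizeof input);
    if (actual == expected)
        printf("PASS\n");
    else
        printf("FAIL: got %u, expected %u\n", (unsigned)actual, (unsigned)expected);
    return actual == expected ? 0 : 1;
}

Once the PC build passes, the same source file can be compiled unchanged into the AVR Studio project and the cycle count measured in the simulator.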

face recognition as3

I want to build a Flash application that can detect the user's eye color, hair color, etc.
Does anyone know of a free library that I can use for this kind of project?
Thanks,
Perhaps you are looking for this library:
http://code.google.com/p/face-recognition-library-as3/
Never tried it myself, but this demo looks promising.
@shaunhusain I think you mistook face detection for face recognition, although face-recognition-library-as3 enables both. The comments in the library's source files are in Polish for now, but English documentation for it is available online.
In answer to the main question of this thread: it should be possible to detect only the eyes using this library. To do that, replace the Haar cascades in the face.zip file with the eye-detection cascades that are part of OpenCV. To detect hair color, you could detect the face and then analyze the pixels just above the detected face region (see the sketch after this answer).
Hope that helps.
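Not AS3, but a language-neutral sketch of that hair-sampling idea, assuming a row-major 32-bit ARGB pixel buffer and an already-detected face rectangle (all names here are hypothetical):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Rect { int x, y, w, h; };  // detected face region

// Average the color of a strip just above the detected face rectangle,
// which is where hair usually is. Returns an ARGB color.
uint32_t EstimateHairColor(const std::vector<uint32_t> &pixels,
                           int width, int height, const Rect &face) {
    int stripTop = std::max(0, face.y - face.h / 4);  // quarter-face-height strip
    uint64_t r = 0, g = 0, b = 0, n = 0;
    for (int y = stripTop; y < face.y && y < height; ++y)
        for (int x = std::max(0, face.x); x < face.x + face.w && x < width; ++x) {
            uint32_t p = pixels[y * width + x];
            r += (p >> 16) & 0xFF;
            g += (p >> 8) & 0xFF;
            b += p & 0xFF;
            ++n;
        }
    if (n == 0) return 0;  // face was at the very top of the image
    return 0xFF000000u | uint32_t(r / n) << 16 | uint32_t(g / n) << 8 | uint32_t(b / n);
}

In AS3 the same loop would read pixels with BitmapData.getPixel32(); as the next answer notes, this kind of per-pixel work is exactly where the single-threaded VM starts to hurt.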
This kind of visual processing is generally too intense to handle within the single thread and VM that AS3 provides; it's a task better suited to a language that compiles to machine code and has threading capabilities, such as C or C++.
Here's something related to the topic. I believe you would be better off just using OpenCV, but this should also contain the appropriate algorithms to port if you have the time and mental capacity to do so: http://www.quasimondo.com/archives/000687.php
Alternatively, to avoid all the leg-work, you may want to consider using a server-side solution like http://face.com/

Developing using pre-release dev tools

We're developing a web site. One of the development tools we're using has an alpha release available of its next version, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines to do pretty much exactly the same thing anyway).
I've done some initial evaluations on it and I like what I see. The question is, should we start actually using it for real? I.e. beyond just evaluating it, actually using it for our development and relying on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production, so if it isn't ready before we are, it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had experience contributing to an open source project once, as you say you hope to do. They ignored the patch for a year (they have customers to attend to, of course, although they don't sell the software but the support). After a year, they rejected the patch with no alternative solution to the problem, and without a sound justification. It was just out of their scope at that time, I guess.
In your situation I would try to solve one or two of their not-so-high-priority, already-reported bugs and see how responsive they are, and then decide, because your success in meeting deadlines will be tied to theirs. If you have to maintain your own copy of their artifacts, that's guaranteed pain.
In short: not only evaluate the product, evaluate the producers.
Regards.
My personal take on this: don't. If they don't come through for you in your time scale, you're stuck and will still have to put in the thousands of lines yourself and probably under a heavy time restriction.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out, that is, to insulate your own code from the library's, for example using the adapter or facade patterns, then go ahead and use the alpha for development (a minimal sketch of such a layer follows below). But determine beforehand the latest date, according to your release schedule, by which you should start developing your own thousands-of-lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then: grin and bear it and develop your own.
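To make the insulation idea concrete, here is a minimal sketch; every name in it (Scene, AlphaLib, FancyRenderer) is a hypothetical stand-in, with stubs included so it compiles on its own:

struct Scene {};  // stand-in for whatever your application renders

namespace AlphaLib {          // stand-in for the alpha-stage library
    struct FancyRenderer {
        void render(const Scene &) {}
    };
}

// The only interface application code is allowed to see.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void Draw(const Scene &scene) = 0;
};

// Adapter that hides the alpha library behind the stable interface.
class AlphaRendererAdapter : public IRenderer {
public:
    void Draw(const Scene &scene) override {
        impl_.render(scene);  // the one place the alpha API is touched
    }
private:
    AlphaLib::FancyRenderer impl_;
};

// If the alpha never stabilizes, write a HomegrownRenderer : IRenderer
// and swap it in; no application code has to change.

The deadline check described above then becomes cheap: the fallback implementation slots in behind the same interface.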
It depends.
For open source environments it depends more on the quality of the release than on the label (alpha/beta/stable) it carries. I've worked with alpha code that was rock solid compared to alleged production code from another producer.
If you've got the source then you can fix any bugs, whereas with closed source (usually commercially supported) you could never release production code built with a beta product, because it's unsupported by the vendor who has the code, and so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course all of the above doesn't apply to anything even remotely safety critical.
It is just a question of managing risks. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?

Open source expert system [closed]

Does anyone know of an open source expert system? Actually, I'm rather interested in calling its inference engine from C#.
Both CLIPS and JESS are already mentioned in other answers, so I will supply this link to CLIPS versus JESS:
http://www.comp.lancs.ac.uk/~kristof/research/notes/clipsvsjess/
It was written June 4, 1999, and at that time the advantage was clearly with CLIPS.
If you don't want to read it all, here are the conclusions:
Chapter 3 The conclusions
Both CLIPS and JESS are products with a large support on the internet,
but CLIPS seems to have a broader audience, probably because it exists
longer. This difference in age results in the CLIPS package being more
stable and complete, while JESS users will still experience some minor
bugs. JESS is constantly updated and the author, Ernest Friedman-Hill,
has been very responsive to user/developer feedback and regularly puts
out new releases and bug fixes.
Nowadays, the choice between JESS and CLIPS depends on the
application. If it is web-based or should reside in applet-form, the
choice of JESS is a very logical one (which is even supported by the
authors of CLIPS). For the more classic applications, CLIPS will
probably be chosen because of its reputation of being more stable and
having more support.
The future of JESS depends highly on the evolution of the web, the
Java programming language and its own future stability. These three
conditions make that there is a great possibility that JESS will
become more popular and more frequently used. Especially the
object-oriented possibilities and the easy integration into Java code
makes JESS’ future very promising.
CLIPS, on the other hand, is more likely to implement the new and
sophisticated features first as they come out, since it still has the
advantage in time. CLIPS has also various extensions and variants(like
FuzzyCLIPS, AGENT CLIPS, DYNACLIPS, KnowExec, CAPE, PerlCLIPS, wxCLIPS
and EHSIS to name a few) that give it an advantage with respect to
support of methods like fuzzy logic and agents.
The multifunctional developing environment of CLIPS for operating
systems that support windows is also an advantage, while JESS has just
one window with two buttons (‘clear window’ and ‘quit’), without a
menu. Figures 1 and 2 depict both environments.
To summarize, CLIPS is still more complete and stable than JESS, but
this might change in the future, since the JESS package is being
improved constantly. Besides that, JESS has also the property of using
Java, which in the long run might prove to be a big advantage over
CLIPS.
These links may also be of interest:
http://en.wikipedia.org/wiki/CLIPS
Commercial & Freeware Expert System Shells
http://www.kbsc.com/rulebase.html
Are there open source expert systems with reasoning capabilities?
I went through the same process about a year ago, trying to find a good .NET system for this. I recall finding a few decent engines, but they were all too general and required too many assumptions.
In the end I found that writing my own system was pretty easy to do, and it did exactly what I wanted, without any extra bull to make it work with some abstract generalized engine.
It might help to know what your intended use is.
Take a look at CLIPS -- it is coded in C.
There's more info on CLIPS at Wikipedia.
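Since the question is about calling the engine from C#, note that CLIPS's C API can be reached from .NET with P/Invoke. As a rough sketch of the embedding side, assuming the 6.3-era environment API (check clips.h in your version; function names have changed across releases):

extern "C" {
#include "clips.h"
}

int main() {
    void *env = CreateEnvironment();
    EnvLoad(env, "rules.clp");  /* load a rule/fact file */
    EnvReset(env);              /* assert the initial facts */
    EnvRun(env, -1);            /* fire rules until the agenda is empty */
    DestroyEnvironment(env);
    return 0;
}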
If you'd consider a rule-processing engine, JBoss Rules (also known as Drools) is the best that I know of. Open source and free. It's written in Java, but designed for integration: you can incorporate your objects in the rules and embed rule-based processing in your components. You can even build or modify rule bases on the fly.
AI::ExpertSystem::Advanced and AI::ExpertSystem::Simple are Perl solutions.
You can try JESS, but it is Java-based. Amzilogic also provides a good platform.

What code metric(s) convince you that provided code is "crappy"? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 5 years ago.
Improve this question
Code lines per file, methods per class, cyclomatic complexity and so on. Developers resist and work around most if not all of them! There is a good Joel article on it (no time to find it now).
What code metric(s) do you recommend for automatically identifying "crappy code"?
What can convince most developers (you can't win all of us over to some crappy metric! :O)) that their code is "crap"?
Only metrics that can be measured automatically count!
Not an automated solution, but I find WTFs per minute useful.
[image: the well-known "WTFs/minute" code review comic; source: osnews.com]
No coding-style metrics are part of such a warning. For me it is about static analysis of the code, which can truly be 'on' all the time:
cyclomatic complexity (detected by Checkstyle);
dependency cycle detection (through FindBugs, for instance);
critical errors detected by, for instance, FindBugs.
I would put test coverage in a second step, as such tests can take time.
Do not forget that "crappy" code is not detected by metrics alone, but by the combination and evolution (as in "trend") of metrics: see the What is the fascination with code metrics? question.
That means you do not just have to recommend code metrics to automatically identify "crappy code"; you also have to recommend the right combination and trend analysis to go along with those metrics.
On a side note, I do share your frustration ;), and I do not share the point of view of tloach (in the comments of another answer): "Ask a vague question, get a vague answer", he says... your question deserves a specific answer.
Number of warnings the compiler spits out when I do a build.
Number of commented-out lines per line of production code. Generally it indicates a sloppy programmer that doesn't understand version control.
Developers are always concerned about metrics being used against them, and calling code "crappy" is not a good start. This is important because if you are worried about your developers gaming the metrics, then don't use them for anything that is to their advantage or disadvantage.
The way this works best is not to let the metric tell you where the code is crappy, but to use the metric to determine where you need to look. You look by holding a code review, and the decision of how to fix the issue is between the developer and the reviewer. I would also err on the side of the developer against the metric: if the code still pops up on the metric but the reviewers think it is good, leave it alone.
But it is important to keep this gaming effect in mind when your metrics start to improve. Great, I now have 100% coverage, but are the unit tests any good? The metric tells me I am OK, but I still need to check it out and look at what got us there.
Bottom line, the human trumps the machine.
Number of global variables.
Non-existent tests (revealed by code coverage). It's not necessarily an indicator that the code is bad, but it's a big warning sign.
Profanity in comments.
Metrics alone do not identify crappy code. However, they can identify suspicious code.
There are a lot of metrics for OO software. Some of them can be very useful:
Average method size (in LOC/statements or in complexity). Large methods can be a sign of bad design.
Number of methods overridden by a subclass. A large number indicates bad class design.
Specialization index (number of overridden methods * nesting level / total number of methods). High numbers indicate possible problems in the class diagram; a worked sketch follows below.
There are a lot more viable metrics, and they can be calculated using tools. This can be a nice help in identifying crappy code.
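A tiny sketch of the specialization index exactly as defined above (the function name is just illustrative; in practice tools compute this for you):

// Specialization index, as defined in the list above.
double SpecializationIndex(int overriddenMethods, int nestingLevel,
                           int totalMethods) {
    if (totalMethods == 0) return 0.0;
    return static_cast<double>(overriddenMethods) * nestingLevel / totalMethods;
}
// e.g. 6 overridden methods at nesting level 3 in a class with 20
// methods gives 6 * 3 / 20 = 0.9.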
global variables
magic numbers
code/comment ratio
heavy coupling (for example, in C++ you can measure this by looking at class relations or the number of cpp/header files that cross-include each other)
const_cast or other types of casting within the same code base (not with external libs)
large portions of code commented out and left in there
My personal favourite warning flag: comment-free code. It usually means the coder hasn't stopped to think about it; plus it automatically makes the code hard to understand, so it ups the crappy ratio.
At first sight: cargo cult application of code idioms.
As soon as I have a closer look: obvious bugs and misconceptions by the programmer.
My bet: a combination of cyclomatic complexity (CC) and code coverage from automated tests (TC).
CC | TC
 2 |  0% - good anyway, the cyclomatic complexity is too small to matter
10 | 70% - good
10 | 50% - could be better
10 | 20% - bad
20 | 85% - good
20 | 70% - could be better
20 | 50% - bad
...
crap4j is a possible tool (for Java) and explains the concept... I'm still in search of a C#-friendly tool :( (the formula it computes is sketched below)
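For reference, crap4j combines exactly these two numbers. A sketch of the published C.R.A.P. formula (comp = cyclomatic complexity, cov = automated test coverage in percent):

// C.R.A.P. score as published by Savoia and Evans (the basis of crap4j):
// fully covered code scores comp; fully uncovered code scores comp^2 + comp.
double CrapScore(double comp, double covPercent) {
    double uncovered = 1.0 - covPercent / 100.0;
    return comp * comp * uncovered * uncovered * uncovered + comp;
}
// e.g. CrapScore(10, 50) = 100 * 0.125 + 10 = 22.5; methods scoring
// above 30 are conventionally flagged as crap.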
Ratio of worthless comments to meaningful comments:
'Set i to 1'
Dim i As Integer = 1
I don't believe there is any such metric. With the exception of code that actually doesn't do what it's supposed to (which is a whole extra level of crappiness) 'crappy' code means code that is hard to maintain. That usually means it's hard for the maintainer to understand, which is always to some extent a subjective thing, just like bad writing. Of course there are cases where everyone agrees the writing (or the code) is crappy, but it's very hard to write a metric for it.
Plus everything is relative. Code doing a highly complex function, in minimal memory, optimized for every last cycle of speed, will look very bad compared with a simple function under no restrictions. But it's not crappy - it's just doing what it has to.
Unfortunately there is no such metric that I know of. Something to keep in mind: no matter what you choose, the programmers will game the system to make their code look good. I have seen that everywhere any kind of "automatic" metric is put into place.
A lot of conversions to and from strings. Generally it's a sign that the developer isn't clear about what's going on and is merely trying random things until something works. For example, I've often seen code like this:
object num = GetABoxedInt();
// long myLong = (long) num; // throws exception
long myLong = Int64.Parse(num.ToString());
when what they really wanted was:
long myLong = (long)(int)num;
I am surprised no one has mentioned crap4j.
Watch out for the ratio of Pattern classes to standard classes. A high ratio would indicate Patternitis.
Check for magic numbers not defined as constants
Use a pattern matching utility to detect potentially duplicated code
Sometimes, you just know it when you see it. For example, this morning I saw:
void mdLicense::SetWindows(bool Option) {
_windows = (Option ? true: false);
}
I just had to ask myself 'why would anyone ever do this?'.
Code coverage has some value, but otherwise I tend to rely more on code profiling to tell if the code is crappy.
Ratio of comments that include profanity to comments that don't.
Higher = better code.
Lines of comments / Lines of code
value > 1 -> bad (too many comments)
value < 0.1 -> bad (not enough comments)
Adjust numeric values according to your own experience ;-)
I take a multi-tiered approach, with the first tier being reasonable readability, offset only by the complexity of the problem being solved. If the code can't pass the readability test, I usually consider it less than good.
TODO comments in production code. They simply show that the developer does not carry tasks through to completion.
Methods with 30 arguments. On a web service. That is all.
Well, there are various ways to judge whether code is good code. Here are some of them:
Cohesion: if a block of code, whether a class or a method, serves multiple unrelated functions, it is low in cohesion. Code low in cohesion tends to be low in reusability and, in turn, low in maintainability.
Code complexity: one can use McCabe's cyclomatic complexity (the number of decision points) to measure it. High code complexity means low usability, i.e. code that is difficult to read and understand; a small worked example follows below.
Documentation: code without enough documentation also contributes to low software quality from the perspective of the code's usability.
Check out the following page to read about a checklist for code review.
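To make the complexity item concrete, here is a sketch of counting decision points by hand (a hypothetical function, for illustration only):

// Cyclomatic complexity = number of decision points + 1. This function
// has an if, a for, and an inner if, so CC = 4; tools that also count
// short-circuit operators add one more for the &&, giving 5.
int CountInRange(const int *v, int n, int limit) {
    int count = 0;
    if (v == 0) return 0;               // decision point 1
    for (int i = 0; i < n; ++i)         // decision point 2
        if (v[i] > 0 && v[i] <= limit)  // decision point 3 (+1 for &&)
            ++count;
    return count;
}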
This hilarious blog post on The Code C.R.A.P Metric could be useful.