How to measure robustness and stress? - stress-testing

I'm working on an investigation of robustness and stress metrics, but I can't really find useful information. I did see that MTBF is an option for robustness in this question: How to measure robustness?. But I was wondering whether there are any other metrics that can be used for measuring robustness, and which metrics can be used for measuring stress.

Good question. Software is different from electronics or mechanics because software incidents are not caused by random faults (electronics) or wear (mechanics); they are caused by developers and ops engineers. This means you can't use MTBF to measure software robustness.
Two ideas to measure robustness:
calculate the availability of the software system - robustness has a direct impact on availability (a minimal sketch follows below)
calculate the robustness test coverage: analyze your historic incidents and try to replay them (daily), e.g. with your chaos monkey
Blog article in German
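By way of illustration, here is a minimal sketch of the two metrics above in Python. The function names and figures are made up for the example; plug in whatever your monitoring system and incident tracker actually report.

    # Availability over a measurement window, as a proxy for robustness.
    def availability(uptime_hours: float, downtime_hours: float) -> float:
        """Availability = uptime / (uptime + downtime)."""
        return uptime_hours / (uptime_hours + downtime_hours)

    # Share of historic incidents that are replayed (e.g. daily) against the system.
    def robustness_test_coverage(replayed_incidents: int, total_incidents: int) -> float:
        return replayed_incidents / total_incidents if total_incidents else 1.0

    print(availability(720.0, 2.0))           # ~0.997, i.e. roughly 99.7% available
    print(robustness_test_coverage(12, 40))   # 0.3, i.e. 30% of past incidents replayed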

Related

Q-Learning Intermediate Rewards

If a Q-Learning agent actually performs noticeably better against opponents in a specific card game when intermediate rewards are included, would this show a flaw in the algorithm or a flaw in its implementation?
It's difficult to answer this question without more specific information about the Q-Learning agent. You might describe the seeking of immediate rewards as the exploitation rate, which generally trades off against the exploration rate. It should be possible to configure this, along with the learning rate, in your implementation. The other important factor is the choice of exploration strategy, and you should not have any difficulty finding resources to assist in making this choice. For example:
http://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/Exploration_QLearning.pdf
https://www.cs.mcgill.ca/~vkules/bandits.pdf
To answer the question directly: it may be a matter of implementation, configuration, agent architecture, or learning strategy that leads to immediate exploitation and a fixation on local optima.
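For illustration, here is a minimal epsilon-greedy sketch in Python showing the exploration/exploitation trade-off described above; the Q-values, decay schedule, and parameter values are invented for the example, not taken from your agent.

    import random

    def epsilon_greedy_action(q_values, epsilon):
        """Explore with probability epsilon, otherwise exploit the best-known action."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))                    # explore
        return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

    # A decaying epsilon shifts the agent from exploration toward exploitation; if
    # intermediate rewards only help while epsilon is high, the exploration schedule
    # (rather than Q-learning itself) is a likely place to look.
    epsilon = 1.0
    for episode in range(1000):
        action = epsilon_greedy_action([0.1, 0.5, 0.2], epsilon)      # toy Q-values
        epsilon = max(0.05, epsilon * 0.995)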

What does the term "Real-Time Software Development refer to?

I saw a job description with the term Real-Time Software Development:
Software Engineers at Boeing develop solutions that provide world class performance and capability to customers around the world.
Boeing Defense, Space and Security in St. Louis is looking for
software engineers to join the growing and talented teams developing
modeling and simulation software for a variety of applications,
including flight control and aerodynamic performance, weapon and
sensor systems, simulation tools and more. The software is integrated
with live assets to enable a next-generation virtual battle
environment to explore new system concepts and optimal engineering
solutions.
Our software engineers are responsible for full life-cycle software development which means you will have a hand in defining the
requirements; designing, implementing and testing the software. You
will work with a team in a casual but professional environment where
there is long-term potential for career growth into management or
technical leadership positions.
**Languages & Databases**
Real-time SW Development Tool
Real-time Target Environment
Job: Software Engineer
I can't figure out what that means in this context. What does Real-Time Software Development mean?
The links in the comments give some useful information. The real problem with Real Time is that there are far fewer uses of it than of ordinary scientific or data-processing applications, and so fewer specialists around.
I used a Real Time development environment many years ago, and a friend of mine used another one more recently. The generic characteristics were:
the development system is an IDE more or less like any other IDE
you have the ability to get the precise time any routine will take, because if you use an RT system, it is because you need deterministic processing times
you have an emulator that allows you to run the program, or more exactly simulate it running on the real system, with different inputs (including hardware inputs), and check both the outputs and the timings
you generally mix high-level programming (C or others) for non-critical parts with low-level assembly routines for time-critical parts.
The rest really depended on the simulated system.
Real time in this context means software that always runs in the same, predictable amount of time. Normal server and desktop OSes such as Mac, Linux, and Windows have multitasking without exact scheduling, making it impossible to say exactly how long a piece of code will take to run. In a real-time OS, the time a piece of code takes is deterministic.
This is used in spacecraft, aircraft, and similar areas.
Not to be confused with real-time processing speed; e.g. encoding video in real time means encoding it as fast as the frames come in.
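As a rough illustration of the scheduling point above (this is not real-time code itself, just a way to see the jitter a general-purpose OS introduces), you can time the same routine repeatedly; a hard real-time system must instead guarantee a worst-case bound. The routine below is an arbitrary stand-in.

    import time

    def routine():
        # Stand-in for a control-loop body doing a fixed amount of work.
        total = 0
        for i in range(100_000):
            total += i
        return total

    samples = []
    for _ in range(20):
        start = time.perf_counter()
        routine()
        samples.append(time.perf_counter() - start)

    # On a desktop OS the durations vary run to run (scheduler, caches, other processes).
    print(f"min={min(samples):.6f}s max={max(samples):.6f}s jitter={max(samples) - min(samples):.6f}s")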

Financial applications on GPGPU

I want to know what sort of financial applications can be implemented using a GPGPU. I'm aware of option pricing / stock price estimation using Monte Carlo simulation on a GPGPU with CUDA. Can someone enumerate the various possibilities for utilizing a GPGPU in any application in the finance domain?
There are many financial applications that can be run on the GPU in various fields, including pricing and risk. There are some links on NVIDIA's Computational Finance page.
It's true that Monte Carlo is the most obvious starting point for many people. Monte Carlo is a very broad class of applications, many of which are amenable to the GPU. Many lattice-based problems can also be run on the GPU. Explicit finite difference methods run well and are simple to implement; there are many examples on NVIDIA's site as well as in the SDK, and they are also used a lot in Oil & Gas codes, so there is plenty of material. Implicit finite difference methods can also work well, depending on the exact nature of the problem; Mike Giles has a 3D ADI solver on his site, which also has other useful finance material.
GPUs are also good for linear algebra type problems, especially where you can leave the data on the GPU to do a reasonable amount of work. NVIDIA provides cuBLAS with the CUDA Toolkit, and you can get cuLAPACK too.
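As a concrete example of the Monte Carlo class mentioned above, here is a plain CPU-side sketch of pricing a European call by simulation (the input values are illustrative); on a GPU each independent path would simply become one thread, which is why this kind of problem maps so well.

    import math, random

    def mc_european_call(spot, strike, rate, vol, maturity, n_paths=100_000):
        """Monte Carlo price of a European call under geometric Brownian motion."""
        drift = (rate - 0.5 * vol * vol) * maturity
        diffusion = vol * math.sqrt(maturity)
        payoff_sum = 0.0
        for _ in range(n_paths):   # every path is independent -> embarrassingly parallel
            terminal = spot * math.exp(drift + diffusion * random.gauss(0.0, 1.0))
            payoff_sum += max(terminal - strike, 0.0)
        return math.exp(-rate * maturity) * payoff_sum / n_paths

    print(mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0))   # ~10.45, plus Monte Carlo noise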
Basically, anything that requires a lot of parallel mathematics to run. As you originally stated, Monte Carlo simulation of options that cannot be priced with closed-form solutions is an excellent candidate. Anything that involves large matrices and operations on them will be ideal; after all, 3D graphics use a lot of matrix mathematics.
Given that many trader desktops have 'workstation'-class GPUs in order to drive several monitors, possibly with video feeds and limited 3D graphics (volatility surfaces, etc.), it would make sense to run some of the pricing analytics on the GPU rather than pushing the responsibility onto a compute grid; in my experience the compute grids frequently struggle under the weight of everyone in the bank trying to use them, and some of the grid computing products leave a lot to be desired.
Outside of this particular problem, there's not a great deal more that can be easily achieved with GPUs, because the instruction set and pipelines are more limited in their functional scope compared to a regular CISC CPU.
The problem with adoption has been one of standardisation; NVidia had CUDA, ATI had Stream. Most banks have enough vendor lock-in to deal with without hooking their derivative analytics (which many regard as extremely sensitive IP) into a gfx card vendor's acceleration technology. I suppose with the availability of OpenCL as an open standard this may change.
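For contrast with the Monte Carlo case, here is the closed-form situation the answer above alludes to: a sketch of the textbook Black-Scholes call price, which needs no GPU at all (inputs are illustrative).

    import math

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bs_call(spot, strike, rate, vol, maturity):
        """Black-Scholes price of a European call; no simulation required."""
        d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * maturity) / (vol * math.sqrt(maturity))
        d2 = d1 - vol * math.sqrt(maturity)
        return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)

    print(bs_call(100.0, 100.0, 0.05, 0.2, 1.0))   # ~10.45, matching the Monte Carlo estimate above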
F# is used a lot in finance, so you might check out these links
http://blogs.msdn.com/satnam_singh/archive/2009/12/15/gpgpu-and-x64-multicore-programming-with-accelerator-from-f.aspx
http://tomasp.net/blog/accelerator-intro.aspx
High-end GPUs are starting to offer ECC memory (a serious consideration for financial and, eh, military applications) and high-precision types.
But it really is all about Monte Carlo at the moment.
You can go to workshops on it, and from their descriptions see that they focus on Monte Carlo.
A good start would be probably to check NVIDIA's website:
CUDA's Finance Showcases
CUDA's Finance Tutorials
Using a GPU introduces limitations on the architecture, deployment, and maintenance of your app.
Think twice before you invest effort in such a solution.
E.g. if you're running in a virtual environment, it would require all physical machines to have GPU hardware installed, plus special vGPU hardware and software support, and licenses.
What if you decide to host your service in the cloud (e.g. Azure, Amazon)?
In many cases it is worth designing your architecture in advance to support scaling out and staying flexible and scalable (with some overhead, of course), rather than scaling up and squeezing as much as you can from your hardware.
Answering the complement of your question: anything that involves accounting can't be done on GPGPU (or binary floating point, for that matter)
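A quick illustration of that last point: binary floating point cannot represent most decimal cash amounts exactly, which is why accounting code uses decimal arithmetic instead.

    from decimal import Decimal

    # Binary floats accumulate representation error on decimal amounts...
    print(0.10 + 0.20 == 0.30)                       # False
    print(sum(0.10 for _ in range(1_000)))           # slightly off from 100.0

    # ...whereas Decimal keeps exact cents, which is what a ledger requires.
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True
    print(sum(Decimal("0.10") for _ in range(1_000)))             # exactly 100.00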

How to estimate FPGA utilization for designing a work-alike core?

I was considering some older-generation FPGAs to interface with a legacy system. So I want a good way of estimating how much space is necessary to replace an ASIC given its transistor count.
Does Verilog versus VHDL affect the utilization? (According to one of our contractors it affects the timing, so utilization seems likely.)
What effect do different vendors' parts have on it? (Actel's architecture is significantly different from Xilinx's, for example. I expect some "weighting" based on this.)
This discussion, originally from comp.arch.fpga, seems to indicate that it's pretty complicated, with factors such as what space-vs-speed tradeoffs you've asked the VHDL (or Verilog) compiler to make, etc. When you consider that VHDL is source code and an FPGA implementation of it is object code, you'll see why it's not straightforward.
"FPGA vs. ASIC" notes that "a design created to work well on an FPGA is usually horrible on an ASIC and a design created for an ASIC may not work at all on an FPGA (certainly at the original frequency)".
A Google search for FPGA ASIC gates may have more useful info.
Verilog vs. VHDL makes little real difference to speed or utilization. The difference is more about the amount of code you have to type (more for VHDL) and strong vs. weak typing.
The marketing gate counts from FPGA vendors are inflated. Altera and Xilinx give similar utilization. Look at memories (if the design is memory-intensive) and the number of flip-flops; that will likely be good enough.
Consider what a similar core requires; for example, if you need an error-coding core, look at a Reed-Solomon core.

Feasibility of GPU as a CPU? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
What do you think the future of GPU-as-a-CPU initiatives like CUDA is? Do you think they are going to become mainstream and be the next adopted fad in the industry? Apple is building a new framework for using the GPU to do CPU tasks, and there has been a lot of success in Nvidia's CUDA project in the sciences. Would you suggest that a student commit time to this field?
Commit time if you are interested in scientific and parallel computing. Don't think of CUDA as making a GPU appear to be a CPU. It only allows a more direct method of programming GPUs than older GPGPU programming techniques.
General purpose CPUs derive their ability to work well on a wide variety of tasks from all the work that has gone into branch prediction, pipelining, superscalar execution, etc. This makes it possible for them to achieve good performance on a wide variety of workloads, while making them suck at high-throughput, memory-intensive floating-point operations.
GPUs were originally designed to do one thing, and do it very, very well. Graphics operations are inherently parallel. You can calculate the colour of all pixels on the screen at the same time, because there are no data dependencies between the results. Additionally, the algorithms needed did not have to deal with branches, since nearly any branch that would be required could be achieved by setting a coefficient to zero or one. The hardware could therefore be very simple. It is not necessary to worry about branch prediction, and instead of making a processor superscalar, you can simply add as many ALUs as you can cram onto the chip.
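To make the branch-elimination point concrete, here is a tiny sketch of the style in plain Python (standing in for shader code; the 0-or-1 coefficient blend is the general idea, not any specific GPU API).

    # Instead of "if mask: result = a else: result = b", graphics-style code blends with
    # a coefficient of 0 or 1, so every "pixel" executes exactly the same instructions.
    def branchless_select(a, b, coeff):
        """coeff is 1.0 to pick a, 0.0 to pick b - no branch needed."""
        return coeff * a + (1.0 - coeff) * b

    pixels_a = [0.2, 0.5, 0.9]
    pixels_b = [1.0, 1.0, 1.0]
    mask     = [1.0, 0.0, 1.0]
    print([branchless_select(a, b, m) for a, b, m in zip(pixels_a, pixels_b, mask)])
    # -> [0.2, 1.0, 0.9]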
With programmable texture and vertex shaders, GPUs gained a path to general programmability, but they are still limited by the hardware, which is still designed for high-throughput floating-point operations. Some additional circuitry will probably be added to enable more general-purpose computation, but only up to a point. Anything that compromises the ability of a GPU to do graphics won't make it in. After all, GPU companies are still in the graphics business, and the target market is still gamers and people who need high-end visualization.
The GPGPU market is still a drop in the bucket, and to a certain extent will remain so. After all, "it looks pretty" is a much lower standard to meet than "100% guaranteed and reproducible results, every time."
So in short, GPUs will never be feasible as CPUs. They are simply designed for different kinds of workloads. I expect GPUs will gain features that make them useful for quickly solving a wider variety of problems, but they will always be graphics processing units first and foremost.
It will always be important to match the problem you have with the most appropriate tool available to solve it.
Long-term I think that the GPU will cease to exist, as general purpose processors evolve to take over those functions. Intel's Larrabee is the first step. History has shown that betting against x86 is a bad idea.
Study of massively parallel architectures and vector processing will still be useful.
First of all, I don't think this question really belongs on SO.
In my opinion the GPU is a very interesting alternative whenever you do vector-based floating-point mathematics. However, this translates to: it will not become mainstream. Most mainstream (desktop) applications do very few floating-point calculations.
It has already gained traction in games (physics engines) and in scientific calculations. If you consider either of those two as "mainstream", then yes, the GPU will become mainstream.
I would not consider these two as mainstream, and I therefore think the GPU will not rise to be the next adopted fad in the mainstream industry.
If you, as a student, have any interest in heavily physics-based scientific calculations, you should absolutely commit some time to it (GPUs are very interesting pieces of hardware anyway).
GPUs will never supplant CPUs. A CPU executes a set of sequential instructions, and a GPU does a very specific type of calculation in parallel. These GPUs have great utility in numerical computing and graphics; however, most programs can in no way utilize this flavor of computing.
You will soon begin seeing new processors from Intel and AMD that include GPU-esque floating-point vector computations as well as standard CPU computations.
I think it's the right way to go.
Considering that GPUs have been tapped to create cheap supercomputers, it appears to be the natural evolution of things. With so much computing power and R&D already done for you, why not exploit the available technology?
So go ahead and do it. It will make for some cool research, as well as a legit reason to buy that high-end graphic card so you can play Crysis and Assassin's Creed on full graphic detail ;)
It's one of those things that you see one or two applications for, but soon enough someone will come up with a 'killer app' that figures out how to do something more generally useful with it, at superfast speeds.
Pixel shaders can apply routines to large arrays of float values; maybe we'll see some GIS coverage applications, or, well, I don't know. If you don't devote more time to it than I have, then you'll have the same level of insight as me - i.e. little!
I have a feeling it could be a really big thing, as do Intel and S3; maybe it just needs one little tweak added to the hardware, or someone with a lightbulb above their head.
With so much untapped power, I cannot see how it would go unused for too long. The question, though, is how the GPU will be used for this. CUDA seems to be a good guess for now, but other technologies are emerging on the horizon which might make it more approachable for the average developer.
Apple has recently announced OpenCL, which they claim is much more than CUDA, yet quite simple. I'm not sure exactly what to make of that, but the Khronos Group (the guys working on the OpenGL standard) is working on the OpenCL standard and trying to make it highly interoperable with OpenGL. This might lead to a technology which is better suited for normal software development.
It's an interesting subject and, incidentally, I'm about to start my master's thesis on how best to make GPU power available to average developers (if possible), with CUDA as the main focus.
A long time ago, it was really hard to do floating-point calculations (thousands or millions of cycles of emulation per instruction on terribly performing (by today's standards) CPUs like the 80386). People who needed floating-point performance could get an FPU (for example, the 80387). The old FPUs were fairly tightly integrated into the CPU's operation, but they were physically external. Later on they became integrated, with the 80486 having an FPU built in.
The old-time FPU is analogous to GPU computation. We can already get it with AMD's APUs. An APU is a CPU with a GPU built into it.
So I think the actual answer to your question is: GPUs won't become CPUs; instead, CPUs will have a GPU built in.