What are the Octave equivalents of the expand, factor, and simplify commands?
example:
expand((x+1)^2);
x^2 + 2*x + 1
simplify(3*x^2+2);
Well, I think the real problem is that Octave is not meant to support much symbolic math the way Wolfram Mathematica and similar systems do. The best way would be:
1. Rethink your problem. Is there really an unavoidable need for these functions? (And is it necessary to solve it using Octave?)
2. If your problem is only about polynomials (as your example suggests), just work with vectors containing their coefficients. For this kind of purpose Octave actually has some functions, see: https://www.gnu.org/software/octave/doc/interpreter/Polynomial-Manipulations.html
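To illustrate the coefficient-vector approach: in Octave, `conv` on coefficient vectors multiplies polynomials, which is how you would "expand" a product like (x+1)^2. Here is a minimal sketch of the same idea in Python (the function name `poly_mul` is my own, for illustration; in Octave you would just call `conv([1 1], [1 1])`):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists,
    highest degree first (Octave's convention).
    This is exactly the discrete convolution Octave's conv computes."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# expand((x+1)^2): (x + 1) has coefficients [1, 1]
print(poly_mul([1, 1], [1, 1]))  # [1, 2, 1], i.e. x^2 + 2*x + 1
```

Factoring goes the other way: `roots` in Octave gives you the zeros of the coefficient vector, from which the linear factors can be read off.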
You may check this out: http://octave.sourceforge.net/symbolic/index.html . It appears to provide only expand, though.
I'm trying to code a floating-point adder;
https://github.com/ElectronNest/FPU/blob/master/FloatAdd.scala
This is about halfway done.
The normalization is a huge part of the code, so I would like to use a for loop or some equivalent representation.
Is it possible to use a loop, or does it need to be written out explicitly?
Best,
S.Takano
This is a very general and large question. The equivalent of a for loop in hardware can be implemented using a number of techniques, pretty much all of which involve registers to hold state. Looking at your code, I would suggest that you start a little smaller and work on syntax; I see many syntax errors currently. I use IntelliJ Community Edition as an editor because it does a great job of helping to get the code properly structured. I would also strongly recommend starting from the chisel-template repository: it has the proper layout, plus examples of a working circuit and a unit-testing harness. Then start with a smaller implementation that does something simple, like just passing input to output and running in a test harness, and slowly build up the circuit to achieve your goals.
Good luck!
Welcome and thank you for your interest in Chisel!
I would like to echo Chick's suggestion to start from something small that compiles and simulates and build up from there. In particular, the linked code conflates some Scala vs. Chisel constructs (e.g. Scala's if/else vs. Chisel's when, .elsewhen, .otherwise), as well as some Verilog vs. Chisel concepts (e.g. bit indexing with [high:low] vs. Chisel's (high, low)).
In case you haven't seen it, I would suggest taking a look at the Chisel Bootcamp which helps explain how to use constructs like for loops to generate hardware.
I'll also plug my own responses to this question on the chisel-users mailing list, where I tried to explain some of the intuition behind writing Chisel generators, including differentiating if and when and using for loops.
Eigen is an awesome algebra/matrix computation C++ library, and I'm using it in a project under development. But someone told me not to use it because it depends on the standard containers, which seems doubtful to me. The reason not to use standard containers is complicated, so let's just ignore it for now. My question is: does Eigen's implementation really depend on the standard containers? I've searched the Eigen homepage but found nothing. Can anyone help me?
I would rather say no, as there are only two very marginal uses:
The first one is in IncompleteCholesky, where std::vector and std::list are used to hold some temporary objects during the computation, not as members. This class is only used if a user explicitly requests it.
The second one is in the SuperLUSupport module, which exists to support a third-party library. Again, you cannot use it accidentally!
The StlSupport module mentioned by Avi is just a helper module to ease the storage of Eigen's matrices within STL containers.
Yes, but only a very little bit, and you may not even need those parts, depending on your precise use. You can run a quick grep to see exactly which std:: containers are used and where. In 3.3.0, there is a std::vector member as well as a std::list<>::iterator in ./src/IterativeLinearSolvers/IncompleteCholesky.h, and std::vectors are typically used as input for sparse matrices (SparseMatrix::setFromTriplets, although it really only needs the iterators).
There is also the ./src/StlSupport/ directory, but I'm not sure that's what you don't want.
I'm trying to accelerate a piece of code using CUDA Fortran. This code uses the common statement in the definition of the variables, which is not valid in device code with CUDA.
What I did was define the variables in a module instead of using the common statement, but this gives me a wrong answer. I'm doing all of this on the normal (host) code first, in order to find a substitute for the common statement.
Code(common)
Code(without common)
I think it should work this way, because these variables are only used by these functions, but it doesn't. Why is that, and what can I do to fix this problem?
After taking a look at your files, I see that you are using OpenACC for Fortran, which is not what I would call CUDA Fortran. I will assume that this is your intent: that you are not actually intending to use CUDA Fortran, but are instead trying to make the OpenACC code work correctly.
I have 2 suggestions.
Be specific. Which variables and which functions are not working correctly? What results are you getting, and what results are you expecting? The best scenario would be to provide a short, complete, compilable example, rather than just dumping entire files of code into a question. Narrow your problem down to a specific example of something that is not working.
Again, assuming your intent is to use OpenACC Fortran, you have already demonstrated that you have at least some idea of how to use the !acc kernels directive. I took a quick look at your code, and the loops you were enclosing did not look terribly complicated. My suggestion is that you identify all of the data that is required as input to these loops and generated as output from these loops, and include additional !acc data directives to specify these as copyin for input data and copyout for output data. A specific example/tutorial is given here. Having said that, as long as the data is in scope when the compiler attempts to use it in an !acc kernels region, I don't think you should be getting incorrect results. But to pursue this further, a specific example would be needed. In general, use of the !acc data directive will help you focus your attention on the data needed and make sure the compiler understands how to transfer it to and from the device, and when.
And as I mentioned already, please paste code examples that you want others to look at in the actual question, rather than including links.
Could you tell me a fast and accurate method to calculate BesselK(mu,z) and BesselI(mu,z), where mu is a real number?
There are several ways to represent the Bessel functions, and none of them is easy to evaluate; there is no simple closed form. However, this paper seems to be a recent approach to evaluating them efficiently:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.120.6055
It seems to be a better way than the old-school method from the 1950s:
http://www.ams.org/journals/mcom/1959-13-066/S0025-5718-1959-0105794-5/S0025-5718-1959-0105794-5.pdf
I haven't had time to test it extensively and exhaustively myself, apart from seeing that it works, but the Colt library for Java includes integer-order Bessel functions in the cern.jet.math package.
https://dst.lbl.gov/ACSSoftware/colt/
I also highly endorse the Digital Library of Mathematical Functions:
http://dlmf.nist.gov/
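For intuition about what such an evaluation involves: the ascending power series for the modified Bessel function, I_mu(z) = sum over k of (z/2)^(2k+mu) / (k! * Gamma(k+mu+1)), is straightforward to code and works for real mu, though it is only practical for moderate |z| (for large |z| you need asymptotic expansions, and BesselK typically needs a different method entirely, such as Temme's algorithm). A minimal Python sketch of the series, with a fixed truncation chosen for illustration:

```python
import math

def bessel_i(mu, z, terms=60):
    """Modified Bessel function I_mu(z) via its ascending power series.
    Only a baseline: accurate for moderate |z|, and mu > -1 here so
    every Gamma argument stays positive."""
    total = 0.0
    for k in range(terms):
        total += (z / 2.0) ** (2 * k + mu) / (
            math.factorial(k) * math.gamma(k + mu + 1))
    return total
```

For example, bessel_i(0, 1.0) agrees with the tabulated value I_0(1) = 1.26606587... to machine precision; the series degrades badly once |z| grows past a few tens.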
Hm, this is language-agnostic. I would prefer doing it in C# or F#, but this time I'm more interested in the question "how would that work anyway".
What I want to accomplish is:
a) I want to LEARN it - it's about my ego this time; it's for a fun project where I want to show myself that I'm really good at this stuff
b) I know a tiny little bit about EBNF (although I don't yet know how operator precedence works in EBNF - Irony.NET does it right, I checked the examples, but this is a bit opaque to me)
c) My parser should be able to take, for example, 5 * (3 + (2 - 9 * (5 / 7)) + 9) and give me the right result
d) To be quite frank, this seems to me the biggest problem in writing a compiler or even an interpreter. I would have no problem generating even 64-bit assembler code (I CAN write assembler manually), but the formula parser...
e) Another thought: even simple computers (like my old Sharp 1246S with only about 2 kB of RAM) can do it... it can't be THAT hard, right? And even very, very old programming languages have formula evaluation... BASIC is from 1964 and it could already evaluate the kind of formula I presented as an example
f) A few ideas, a few inspirations would really be enough - I just have no clue how to handle operator precedence and the parentheses - I DO, however, know that it involves an AST and that many people use a stack
So, what do you think?
You should go learn about Recursive Descent parsers.
Check out a Code Golf exercise in doing just this, 10 different ways:
Code Golf: Mathematical expression evaluator (that respects PEMDAS)
Several of these "golf" solutions are recursive descent parsers just coded in different ways.
You'll find that doing just expression parsing is by far the easiest thing in a compiler. Parsing the rest of the language is harder, but understanding how the code elements interact and how to generate good code is far more difficult.
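To make the recursive-descent suggestion concrete: precedence falls out of the grammar itself, with one function per precedence level, and parentheses are just another alternative at the bottom level. A minimal sketch in Python (names and structure are my own; no unary minus or error handling):

```python
import re

def evaluate(src):
    """Tiny recursive-descent evaluator for +, -, *, / and parentheses.
    Grammar:  expr   := term (('+'|'-') term)*
              term   := factor (('*'|'/') factor)*
              factor := NUMBER | '(' expr ')'
    Precedence comes from the grammar: term binds tighter than expr."""
    tokens = re.findall(r'\d+\.?\d*|[-+*/()]', src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == '(':
            take()               # consume '('
            value = expr()
            take()               # consume ')'
            return value
        return float(take())     # a number

    def term():
        value = factor()
        while peek() in ('*', '/'):
            if take() == '*':
                value *= factor()
            else:
                value /= factor()
        return value

    def expr():
        value = term()
        while peek() in ('+', '-'):
            if take() == '+':
                value += term()
            else:
                value -= term()
        return value

    return expr()
```

Running evaluate("5 * (3 + (2 - 9 * (5 / 7)) + 9)") gives 265/7, the expected result for the asker's example. Building an AST instead of computing directly is the same structure: each function returns a node rather than a number.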
You may also be interested in how to express a parser using BNF, and how to do something with that BNF. Here's an example of how to parse and manipulate algebra symbolically with an explicit BNF and an implicit AST as a foundation. This isn't what compilers traditionally do, but the machinery that does it is founded deeply in compiler technology.
For a stack-based parser implemented in PHP that uses Dijkstra's shunting-yard algorithm to convert infix to postfix notation, with support for functions with a varying number of arguments, you can look at the source for the PHPExcel calculation engine.
Traditionally, formula processors on computers use POSTFIX notation. They use a stack: operands are pushed, and when an operator is read from the input, the top two operands are popped, the operator is applied, and the result is pushed back.
What you want is an INFIX to POSTFIX converter, which is really quite simple. Once you're in postfix, processing is the simplest thing you'll ever do.
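Both halves can be sketched in a few lines of Python (a minimal version, assuming the expression is already tokenized; function names are my own, and only the four left-associative binary operators are handled):

```python
def to_postfix(tokens):
    """Dijkstra's shunting-yard algorithm: infix token list -> postfix list.
    Operators of higher (or equal, for left-associativity) precedence are
    flushed from the operator stack before a new operator is pushed."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, ops = [], []
    for tok in tokens:
        if tok in prec:
            while ops and ops[-1] != '(' and prec[ops[-1]] >= prec[tok]:
                out.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()            # discard the '('
        else:                    # a number
            out.append(tok)
    while ops:
        out.append(ops.pop())
    return out

def eval_postfix(postfix):
    """Evaluate postfix: push numbers; for each operator, pop two operands,
    apply it, and push the result."""
    stack = []
    for tok in postfix:
        if tok in ('+', '-', '*', '/'):
            b, a = stack.pop(), stack.pop()
            stack.append({'+': a + b, '-': a - b,
                          '*': a * b, '/': a / b}[tok])
        else:
            stack.append(float(tok))
    return stack[0]
```

For example, to_postfix("2 + 3 * 4".split()) yields ['2', '3', '4', '*', '+'], and evaluating it gives 14; the parenthesized example from the question comes out to 265/7 the same way.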
If you want to go for an existing solution I can recommend a working, PSR-0 compatible implementation of the shunting yard algorithm: https://github.com/andig/php-shunting-yard/tree/dev.