How is inlining more efficient than a recursive definition? - language-agnostic

My Programming Paradigms textbook, Essentials of Programming Languages (3rd ed.), Chapter 1 has an exercise:
Exercise 1.12
Eliminate the one call to subst-in-s-exp in subst by
replacing it by its definition and simplifying the resulting
procedure. The result will be a version of subst that does not need
subst-in-s-exp. This technique is called inlining, and is used by
optimizing compilers.
The original code has two functions, subst and subst-in-s-exp, which together substitute all occurrences of the old symbol with the new symbol in the input list.
(define subst
  (lambda (new old slist)
    (if (null? slist)
        '()
        (cons
         (subst-in-s-exp new old (car slist))
         (subst new old (cdr slist))))))
(define subst-in-s-exp
  (lambda (new old sexp)
    (if (symbol? sexp)
        (if (eqv? sexp old) new sexp)
        (subst new old sexp))))
The answer to this exercise is to eliminate subst-in-s-exp, which gives this:
(define subst
  (lambda (slist old new)
    (cond
      [(null? slist) '()]
      [(eqv? (car slist) old) (cons new (subst (cdr slist) old new))]
      [else (cons (car slist) (subst (cdr slist) old new))])))
Why is inlining better, besides the fact that the result may be a lot shorter (less space)? Does the size of the recursion change? In other words, does this inlining create fewer stack frames?
Moreover, how can I use this idea to make my C++, Python, and Java code faster? Can I extend this idea easily? Thanks.
I tagged this in Scheme (actually, Racket) because this is the choice of language in the book.

Inlining is a pretty standard compiler optimization, but as AoeAoe said, it's generally better to write your code so that it's readable and let the compiler do all the inlining for you.
The immediate benefit of inlining is that it eliminates branches in your code. It means your CPU can keep reading straight down your code rather than having to spend a couple of clock cycles finding the next section of code to execute.
However, inlining has some other benefits as well. You end up with bigger chunks of code, which means the compiler has more code and data to play with. It might be able to stick more things in registers, or do constant folding to eliminate more computations. The compiler can also do a better job of instruction scheduling, because it has more instructions to move around.
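For example, here is a small Scheme sketch (the names are just illustrative, not taken from the question) of how inlining can expose a constant-folding opportunity:

(define (circumference r) (* 2 3.14159 r))

(define (unit-circumference) (circumference 1))
; after inlining:         (* 2 3.14159 1)
; after constant folding: 6.28318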
The drawback is that inlining increases your resulting code size. Especially with modern CPUs running so much faster than memory, it can often be better to inline less code in order to keep all of a hot section of code in L1 cache.

Adding just a bit to Eric's answer, inlining can be a big win in dynamically typed languages, where inlining a call may make it possible for a compiler to specialize the implementation to the kinds of data that appear.
For instance: suppose I have a function called f:
(define (f x) (+ (* x x) 3.0))
... and I inline a call to it:
(+ (f 3.2) (g 3.9))
In this case, the inlined code makes it clear that the multiplication and addition can't be called with non-numbers, so these run-time error checks can be elided.
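To make that concrete, the call site after inlining might look something like this (a sketch, not actual compiler output):

(+ (+ (* 3.2 3.2) 3.0) (g 3.9))

Here the multiplication and the inner addition visibly receive number literals, so their run-time type checks can be dropped; only the result of (g 3.9) still needs checking.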

In answer to: "How can I use this idea to make my C++, Python, and Java code faster?"
I don't think that the Python runtime does things like automatically inlining small methods. (If this is wrong, someone please correct me!) So in Python, if you have a piece of code which is very performance-sensitive, perhaps running tens of thousands or millions of times in an inner loop, you could try inlining manually. Only do this if the code is really a bottleneck, and it really needs to be as fast as possible, and always measure to see if such optimizations are actually helping anything. (If you try inlining something and it doesn't help, it's better to undo the optimization, because inlining will generally make your code harder to read.)
In Java and C++, any good compiler will inline small methods for you. The thing which you can (sometimes) do, is help the compiler see that a method can be inlined. If the exact method which is called depends on the run-time type of an object (as when using virtual methods in C++), the compiler will not be able to inline the call. static methods in Java can easily be inlined, and declaring methods as final (when it makes sense to do so) may also make it possible for the compiler to inline.
If you learn more in the future about compilers and how they work, you will be better able to see how to write performance-sensitive code in a way that the compiler can optimize for you.

Related

Are there any macros that cannot be expressed as a function?

Are there any macros that:
Cannot be expressed as an equivalent function, or:
Are difficult to express as an equivalent function, or:
Are significantly worse in terms of performance than an equivalent function?
Can you give an example of such a macro (and function)?
My question refers specifically to Lisp's macros and functions, but you may treat this question more generally. I'm particularly interested in recursive macros.
Edit:
I should've been more specific. When I asked the above question, I had in mind the specific context of my Lisp project, which is a kind of symbolic math calculator.
I was wondering if there is good reason to use macro for any of the mathematical operations, like:
analytical integration,
analytical differentiation,
polynomial operations,
computing Taylor series for given function,
computing trigonometric identities,
and so on...
For some reason I need to use a few recursive macros for this project. So, if I can reformulate my question:
Can you give examples of smart use of [recursive] macros in a symbolic calculation?
Are there any macros that:
Cannot be expressed as an equivalent function, or:
Are difficult to express as an equivalent function, or:
Are significantly worse in terms of performance than an equivalent function?
The answer is never quite this simple. It's typically "yes and no". As I see it, there are two big advantages of macros: syntactic sugar and delayed evaluation. For instance, a macro like with-open-file, which lets you write:
(with-open-file (var "some-filename")
  ; operations with var
  )
is pretty much just syntactic sugar for
(let ((x (open ...)))
  (unwind-protect
      (funcall (lambda (var)
                 ; operations with var
                 )
               x)
    ; cleanup forms
    ))
That's sort of a mix of delayed evaluation and syntactic sugar, depending on how you look at it. It's delayed evaluation, in that all the operations with var are wrapped up into a lambda function. It's also syntactic sugar, because you could obviously abstract the above into:
(defun call-with-open-file (open-args function)
  (let ((x (apply 'open open-args)))
    (unwind-protect (funcall function x)
      ; cleanup forms
      )))
and then with-open-file is just syntactic sugar:
(defmacro with-open-file ((var &rest open-args) &body body)
  `(call-with-open-file (list ,@open-args)
                        (lambda (,var) ,@body)))
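For comparison, the same macro-over-function split can be sketched in Scheme, where call-with-input-file already plays the role of the functional interface (with-input-file here is just an illustrative name):

(define-syntax with-input-file
  (syntax-rules ()
    ((_ (var filename) body ...)
     (call-with-input-file filename
       (lambda (var) body ...)))))

; (with-input-file (port "some-file.txt") (read-line port))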
That's a typical case that exhibits both delayed evaluation (of the body forms) and syntactic sugar around a functional interface. You can typically always do that. E.g., with if, you could write a functional interface:
(defun %if (condition then &optional (else (constantly nil)))
  (funcall (cond (condition then) (t else))))
Then if can be implemented as a macro:
(defmacro if (condition then &optional else)
  `(%if ,condition (lambda () ,then) (lambda () ,else)))
if and other conditional forms are a bit unique, in this sense, though, because the implementation ultimately has to provide you some conditional operation. That operator typically isn't a macro, though, but a special form.
What about other special macros like loop, that define domain specific languages? You can do those too, but you pretty much would just end up having the function accept the "body" of the macro version and interpret it at runtime. E.g., you could do
(defun %loop (&rest loop-body)
  ; interpret body
  )
but that's obviously going to be a big performance hit.
So, I'd posit that there are no macros that don't have a semantic equivalent, but these will require somewhat different arguments. Some of those semantically equivalent functions will be difficult to express, and some of them (e.g., when passing anonymous functions around) will certainly have significantly worse performance.
I think your question is not well posed, since it is based on an ambiguous use of the term “equivalent”. At first sight, it seems that you intend “equivalent” to mean “calculating the same value” (and this is confirmed by your third question, about performance).
But they are not equivalent at all, because functions produce (or calculate) values, while macros produce (or calculate) programs! (and when you understand this, you will understand that a macro actually is a function, a function from s-expressions (the “quoted arguments”) to s-expressions).
So, I think that the answer to your questions should be given in these terms:
1) If you stretch the meaning of equivalence to “when the result of a macro (i.e. a program) is further evaluated by the system”, then an answer like that of Joshua Taylor is to be taken into consideration;
2) If you are asking about macros and functions per se, they are not equivalent at all.
And concerning their use in the task you are addressing: macros can be really useful in defining particular control structures, or specialized ways of performing computation, as in DSLs (Domain Specific Languages). But my advice is to use them only when you think your problem could be solved more easily by adding new, more powerful tools to the usual ones (i.e. predefined functions, special forms and macros), and when you have experience in writing complex macros (to practice this, see for instance Paul Graham's book On Lisp).
Are there any macros that cannot be expressed as a function?
Yes; all macros in any expertly written Lisp program!
There are sometimes macros that can be replaced by functions, if you radically change or augment the underlying language implementation.
For instance, a macro might be simulating something that otherwise requires continuations in a Lisp dialect that doesn't have them. Or it might be doing something that looks like non-strict evaluation, over a strict language. Something that can be done just with functions in a call-by-name language can be expressed with macros over pure call-by-value.
Macros go away when they are expanded; all that is left is special operators and functions. What functions can or cannot do depends on the available special operators. For instance, without an operator to capture a continuation, a function cannot abandon its evaluation in such a way that it can later be restarted.
Therefore it is a false dichotomy to think about the power as being divided between macros and functions, while ignoring special operators.
A given problem can be solved by a combination of functions and special operators. If it requires certain special operators, then we cannot say that the problem is solved by functions alone.
Macros can be used in such a way that they hide the use of special operators. Macros which conceal the essential use of special operators cannot be rewritten as functions.
For instance, a macro that provides syntactic sugar over a lambda operator cannot be written as a function. The macro's essential functionality depends on the fact that it expands to a lambda operator which captures a closure in the original lexical environment where the macro call occurs.
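A small Scheme sketch of that point (the macro and function names here are hypothetical):

(define-syntax capture-thunk          ; sugar over lambda: defers evaluation
  (syntax-rules ()
    ((_ expr) (lambda () expr))))

(define (make-counter)
  (let ((n 0))
    (capture-thunk (begin (set! n (+ n 1)) n))))
; each call of the returned thunk re-evaluates expr in the original
; lexical environment; a function receiving an already-evaluated
; argument could not do that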
When Lisp language designers extend a dialect with new core functionality, they do so by adding new special forms. Macros are added at the same time to make the forms easier to use. For instance, I recently added delimited continuations to a Lisp dialect. The underlying API is not the easiest thing to use for certain simple tasks, so I also provided macros which provide an easy-to-use "generator" abstraction. Needless to say, these generator macros cannot be implemented with functions. Not only that, those macros cannot be implemented at all without the delimited continuation support; all they do is write code that depends on using these new special forms, and those special forms are implemented by hacks deep in the language core which do nasty things like copying sections of the run-time stack to the heap, and back to a different area of the stack.
Now in a purely interpretive Lisp that runs programs by evaluating raw source code, you can have a form of function which is as powerful as a macro (in fact, more so). This is a function which, when it is called at run-time, receives its argument expressions unevaluated, together with the run-time environment needed to evaluate them. Essentially, such a function, though written by the user, acts as an "interpreter plugin", called upon to interpret code in an arbitrary way. In historic Lisp terminology, this kind of function is called a "fexpr".
The relationship between macros and fexprs is that macros are to fexprs what compilers are to interpreters. If you have a dialect with fexprs, then there is no reason to use macros if the only requirement is to support some syntax with some semantics, without caring about performance. A macro may be able to do the same thing by compiling to a more efficient translation. Even though the dialect is purely interpretive, it's nevertheless faster to have the interpreter run some macro-generated code, than for the interpreter to interpret a function, which itself interprets code.
But, of course, though fexprs are functions, they are not ordinary functions; ordinary functions receive evaluated arguments and no environment. So that just changes the question to: are there essential fexprs that cannot be replaced by ordinary functions?
The answer is: yes, any fexprs in an expertly written program ...
Any Lisp macro which does not evaluate its arguments (only its expansion is evaluated, since it is a macro) cannot be expressed as a function, since function application is done on evaluated arguments. For example, you could define a macro my-if which behaves exactly like if (and if cannot be a function).
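A minimal sketch in Scheme (the point applies to any Lisp with macros; my-if and if-as-function are illustrative names):

(define-syntax my-if               ; fine: the branches stay unevaluated
  (syntax-rules ()
    ((_ c t e) (cond (c t) (else e)))))

(define (if-as-function c t e)     ; broken: both branches get evaluated
  (cond (c t) (else e)))

(my-if #t 'ok (error "never reached"))          ; => ok
; (if-as-function #t 'ok (error "boom"))        ; signals an error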
C. Queinnec's book Lisp In Small Pieces explains that in great detail (and has several chapters about macros). I strongly recommend reading it (since answering your too-broad question may require an entire book, not a paragraph).
If a macro expands one of its arguments several times, it might be slower than the equivalent function (because some sub-computations could be done twice if expanded twice).
(of course, the answer to all your questions can be yes; I leave up to you how to find some examples).
PS. BTW, this is even true in C....

LISP - what does CONS need to work?

I had this question in an exam, how would you solve it?
CONS is a fundamental Common Lisp function. Which functionality must the Common Lisp environment provide to make it work? What would happen to this code without it?
(defun test (n l1 l2)
  (when (plusp n)
    (append l1 l2)
    (something (1- n) l1 l2)))
prompt> (test fourtytwo '(4) '(2))
From which perspective?
From a language implementer's perspective, you need memory and a data type that holds two pointers, and perhaps flags for type and GC unless those are embedded in the pointer.
For a developer, it needs two arguments that can hold any data. Both the reader and append use it, so without it you won't have cons cells and therefore no lists either.

Clojure - test for equality of function expression?

Suppose I have the following clojure functions:
(defn a [x] (* x x))
(def b (fn [x] (* x x)))
(def c (eval (read-string "(defn d [x] (* x x))")))
Is there a way to test for the equality of the function expression - some equivalent of
(eqls a b)
returns true?
It depends on precisely what you mean by "equality of the function expression".
These functions are going to end up as bytecode, so I could for example dump the bytecode corresponding to each function to a byte[] and then compare the two bytecode arrays.
However, there are many different ways of writing semantically equivalent methods, that wouldn't have the same representation in bytecode.
In general, it's impossible to tell what a piece of code does without running it. So it's impossible to tell whether two bits of code are equivalent without running both of them, on all possible inputs.
This is at least as bad, computationally speaking, as the halting problem, and possibly worse.
The halting problem is undecidable as it is, so the general-case answer here is definitely no (and not just for Clojure but for every programming language).
I agree with the above answers with regard to Clojure not having a built-in ability to determine the equivalence of two functions, and that it has been proven that you cannot test programs functionally (also known as black-box testing) to determine equality, due to the halting problem (unless the input set is finite and defined).
I would like to point out that it is possible to algebraically determine the equivalence of two functions, even if they have different forms (different byte code).
The method for proving the equivalence algebraically was developed in the 1930s by Alonzo Church and is known as beta reduction in the lambda calculus. This method is certainly applicable to the simple forms in your question (which would also yield the same byte code) and also to more complex forms that would yield different byte codes.
I cannot add to the excellent answers by others, but would like to offer another viewpoint that helped me. If you are e.g. testing that the correct function is returned from your own function, instead of comparing the function object you might get away with just returning the function as a 'symbol.
I know this probably is not what the author asked for but for simple cases it might do.

Can you implement any pure LISP function using the ten primitives? (ie no type predicates)

This site makes the following claim:
http://hyperpolyglot.wikidot.com/lisp#ten-primitives
McCarthy introduced the ten primitives of lisp in 1960. All other pure lisp
functions (i.e. all functions which don't do I/O or interact with the environment)
can be implemented with these primitives. Thus, when implementing or porting lisp,
these are the only functions which need to be implemented in a lower language. The
way the non-primitives of lisp can be constructed from primitives is analogous to
the way theorems can be proven from axioms in mathematics.
The primitives are: atom, quote, eq, car, cdr, cons, cond, lambda, label, apply.
My question is - can you really do this without type predicates such as numberp? Surely there is a point when writing a higher level function that you need to do a numeric operation - which the primitives above don't allow for.
Numbers can be represented with just those primitives; it's just rather inconvenient and difficult to conceptualize the first time you see it.
Similar to how the natural numbers are represented with sets increasing in size, they can be simulated in Lisp as nested cons cells.
Zero would be the empty list, or (). One would be the singleton cons cell, or (() . ()). Two would be one plus one, or the successor of one, where we define the successor of x to be (cons () x) , which is of course (() . (() . ())). If you accept the Infinity Axiom (and a few more, but mostly the Infinity Axiom for our purposes so far), and ignore the memory limitations of real computers, this can accurately represent all the natural numbers.
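Here is a small Scheme sketch of that encoding (the add helper is my own addition, just to show that arithmetic is definable in terms of it):

(define zero '())
(define (succ n) (cons '() n))

(define one (succ zero))   ; (() . ()),        printed as (())
(define two (succ one))    ; (() . (() . ())), printed as (() ())

(define (add m n)          ; addition as repeated successor
  (if (null? m) n (succ (add (cdr m) n))))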
It's easy enough to extend this to represent all the integers and then the rationals [1], but representing the reals in this notation would be (I think) impossible. Fortunately, this doesn't dampen our fun, as we can't represent all the reals on our computers anyway; we make do with floats and doubles. So our representation is just as powerful.
In a way, 1 is just syntactic sugar for (() . ()).
Hurray for set theory! Hurray for Lisp!
EDIT: Ah, for further clarification, let me address your question about type predicates, though at this point the answer may already be clear. Since your numbers have a distinct form, you can test these linked lists with a function of your own creation that tests for this particular structure. My Scheme isn't good enough anymore to write it in Scheme, but I can attempt to in Clojure.
Regardless, you may be saying that it could give you false positives: perhaps you're simply trying to represent sets and you end up having the same structure as a number in this system. To that I reply: well, in that case, you do in fact have a number.
So you can see, we've got a pretty decent representation of numbers here, aside from how much memory they take up (not our concern), how ugly they look when printed at the REPL (also not our concern), and how inefficient it will be to operate on them (e.g. we have to define our addition etc. in terms of list operations: slow and a bit complicated). But none of these are our concern: the speed really should and could depend on the implementation details, not on what you're doing with the language.
So here, in Clojure (but using only things we basically have access to in our simple Lisp), is numberp. (I hope; feel free to correct me, I'm groggy as hell, etc., excuses etc.)
(defn numberp
  [x]
  (cond
    (nil? x) true
    (and (coll? x) (nil? (first x))) (numberp (second x))
    :else false))
[1] For integers, represent them as cons cells of the naturals. Let the first element in the cons cell be the "negative" portion of the integer, and the second element be the "positive" portion of the integer. In this way, -2 can be represented as (2, 0) or (4, 2) or (5, 3), etc. For the rationals, let them be represented as cons cells of the integers: e.g. (-2, 3), etc. This does give us the possibility of having different data structures represent the same number; however, this can be remedied by writing functions that test two numbers to see if they're equivalent: we'd define these functions in terms of the already-existing equivalence relations set theory offers us. Fun stuff :)

Can every recursion be converted into iteration?

A reddit thread brought up an apparently interesting question:
Tail recursive functions can trivially be converted into iterative functions. Other ones can be transformed by using an explicit stack. Can every recursion be transformed into iteration?
The (counter?)example in the post is the pair:
(define (num-ways x y)
  (cond ((= x 0) 1)
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))
Can you always turn a recursive function into an iterative one? Yes, absolutely, and the Church-Turing thesis proves it if memory serves. In lay terms, it states that what is computable by recursive functions is computable by an iterative model (such as the Turing machine) and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it's definitely possible.
In many cases, converting a recursive function is easy. Knuth offers several techniques in "The Art of Computer Programming". And often, a thing computed recursively can be computed by a completely different approach in less time and space. The classic example of this is Fibonacci numbers or sequences thereof. You've surely met this problem in your degree plan.
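A quick Scheme sketch of that Fibonacci case (fib-rec and fib-iter are illustrative names), first as the naive recursion and then as a linear-time iterative rewrite:

(define (fib-rec n)                 ; exponential time, deep recursion
  (if (< n 2) n (+ (fib-rec (- n 1)) (fib-rec (- n 2)))))

(define (fib-iter n)                ; linear time, constant stack
  (let loop ((a 0) (b 1) (i 0))
    (if (= i n) a (loop b (+ a b) (+ i 1)))))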
On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
In the final analysis, many functions are just plain easier to understand, read, and write in recursive form. Unless there's a compelling reason, you probably shouldn't (manually) convert these functions to an explicitly iterative algorithm. Your computer will handle that job correctly.
I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.
In these circumstances, you might try to overcome political resistance to the original language. You might find yourself reimplementing Lisp badly, as in Greenspun's (tongue-in-cheek) tenth law. Or you might just find a completely different approach to solution. But in any event, there is surely a way.
Is it always possible to write a non-recursive form for every recursive function?
Yes. A simple formal proof is to show that both µ-recursion and a non-recursive calculus such as GOTO are Turing complete. Since all Turing-complete calculi are strictly equivalent in their expressive power, all recursive functions can be implemented by the non-recursive Turing-complete calculus.
Unfortunately, I’m unable to find a good, formal definition of GOTO online so here’s one:
A GOTO program is a sequence of commands P executed on a register machine such that P is one of the following:
HALT, which halts execution
r = r + 1 where r is any register
r = r - 1 where r is any register
GOTO x where x is a label
IF r ≠ 0 GOTO x where r is any register and x is a label
A label, followed by any of the above commands.
However, the conversion between recursive and non-recursive functions isn't always trivial (except by mindless manual re-implementation of the call stack).
For further information see this answer.
Recursion is implemented as stacks or similar constructs in the actual interpreters or compilers. So you certainly can convert a recursive function to an iterative counterpart, because that's how it's always done (albeit automatically). You'll just be duplicating the compiler's work in an ad-hoc and probably very ugly and inefficient manner.
Basically yes. In essence, what you end up having to do is replace method calls (which implicitly push state onto the stack) with explicit stack pushes to remember where the 'previous call' had gotten up to, and then execute the 'called method' instead.
I'd imagine that the combination of a loop, a stack and a state-machine could be used for all scenarios by basically simulating the method calls. Whether or not this is going to be 'better' (either faster, or more efficient in some sense) is not really possible to say in general.
Recursive function execution flow can be represented as a tree.
The same logic can be done by a loop, which uses a data-structure to traverse that tree.
Depth-first traversal can be done using a stack, breadth-first traversal can be done using a queue.
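As an illustration, here is a Scheme sketch (with illustrative names) of a depth-first computation written once with the call stack and once with an explicit stack:

(define (count-leaves-rec t)
  (cond ((null? t) 0)
        ((pair? t) (+ (count-leaves-rec (car t))
                      (count-leaves-rec (cdr t))))
        (else 1)))

(define (count-leaves-iter t)
  (let loop ((stack (list t)) (n 0))
    (cond ((null? stack) n)
          ((null? (car stack)) (loop (cdr stack) n))
          ((pair? (car stack))
           (loop (cons (car (car stack))
                       (cons (cdr (car stack)) (cdr stack)))
                 n))
          (else (loop (cdr stack) (+ n 1))))))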
So, the answer is: yes. Why: https://stackoverflow.com/a/531721/2128327.
Can any recursion be done in a single loop? Yes, because a Turing machine does everything it does by executing a single loop:
1. fetch an instruction,
2. evaluate it,
3. go to step 1.
Yes, using explicitly a stack (but recursion is far more pleasant to read, IMHO).
Yes, it's always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
In principle it is always possible to remove recursion and replace it with iteration in a language that has infinite state both for data structures and for the call stack. This is a basic consequence of the Church-Turing thesis.
Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.
All computable functions can be computed by Turing Machines and hence the recursive systems and Turing machines (iterative systems) are equivalent.
Sometimes replacing recursion is much easier than that. Recursion used to be the fashionable thing taught in CS in the 1990's, and so a lot of average developers from that time figured if you solved something with recursion, it was a better solution. So they would use recursion instead of looping backwards to reverse order, or silly things like that. So sometimes removing recursion is a simple "duh, that was obvious" type of exercise.
This is less of a problem now, as the fashion has shifted towards other technologies.
Recursion is nothing but calling the same function on the stack, and once the function call finishes it is removed from the stack. So one can always use an explicit stack to manage this calling of the same operation using iteration.
So, yes all-recursive code can be converted to iteration.
Removing recursion is a complex problem and is feasible under well defined circumstances.
The below cases are among the easy:
tail recursion
direct linear recursion
Apart from the explicit stack, another pattern for converting recursion into iteration is the use of a trampoline.
Here, the functions either return the final result, or a closure of the function call that they would otherwise have performed. Then, the initiating (trampolining) function keeps invoking the returned closures until the final result is reached.
This approach works for mutually recursive functions, but I'm afraid it only works for tail-calls.
http://en.wikipedia.org/wiki/Trampoline_(computers)
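A minimal trampoline sketch in Scheme (my-even? and my-odd? are illustrative names):

(define (trampoline result)          ; keep calling until a non-thunk appears
  (if (procedure? result)
      (trampoline (result))
      result))

(define (my-even? n)
  (if (= n 0) #t (lambda () (my-odd? (- n 1)))))

(define (my-odd? n)
  (if (= n 0) #f (lambda () (my-even? (- n 1)))))

; (trampoline (my-even? 100000)) => #t, without growing the call stack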
I'd say yes - a function call is nothing but a goto and a stack operation (roughly speaking). All you need to do is imitate the stack that's built while invoking functions and do something similar to a goto (you can imitate gotos even in languages that don't explicitly have this keyword).
Have a look at the following entries on wikipedia, you can use them as a starting point to find a complete answer to your question.
Recursion in computer science
Recurrence relation
Here is a paragraph that may give you a hint on where to start:
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Also have a look at the last paragraph of this entry.
It is possible to convert any recursive algorithm to a non-recursive one, but often the logic is much more complex and doing so requires the use of a stack. In fact, recursion itself uses a stack: the function stack.
More Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions
tazzego, recursion means that a function will call itself whether you like it or not. When people are talking about whether or not things can be done without recursion, they mean this and you cannot say "no, that is not true, because I do not agree with the definition of recursion" as a valid statement.
With that in mind, just about everything else you say is nonsense. The only other thing that you say that is not nonsense is the idea that you cannot imagine programming without a callstack. That is something that had been done for decades until using a callstack became popular. Old versions of FORTRAN lacked a callstack and they worked just fine.
By the way, there exist Turing-complete languages that only implement recursion (e.g. SML) as a means of looping. There also exist Turing-complete languages that only implement iteration as a means of looping (e.g. FORTRAN IV). The Church-Turing thesis proves that anything possible in a recursion-only language can be done in a non-recursive language and vice versa, by the fact that they both have the property of Turing-completeness.
Here is an iterative algorithm:
def howmany(x, y)
  a = {}
  for n in (0..x+y)
    for m in (0..n)
      a[[m, n-m]] = if m == 0 or n-m == 0 then 1 else a[[m-1, n-m]] + a[[m, n-m-1]] end
    end
  end
  return a[[x, y]]
end
end