Missing arguments in a nested function

I am following a Python course on finance about portfolio theory. I have to create a function with a nested function in it.
My problem is that I get the error message "neg_sharpe_ratio() missing 2 required positional arguments: 'er' and 'cov'", even though to my mind 'er' and 'cov' are already defined in my function msr below, so I don't understand how they can be missing.
from scipy.optimize import minimize

def msr(riskfree_rate, er, cov):
    n = er.shape[0]
    init_guess = np.repeat(1/n, n)
    bounds = ((0.00, 1.0),) * n
    weights_sum_to_1 = {
        'type': 'eq',
        'fun': lambda weights: np.sum(weights) - 1
    }
    def neg_sharpe_ratio(weights, riskfree_rate, er, cov):
        r = erk.portfolio_return(weights, er)
        vol = erk.portfolio_vol(weights, cov)
        return -(r - riskfree_rate) / vol
    results = minimize(neg_sharpe_ratio, init_guess,
                       args=(cov,), method="SLSQP",
                       options={'disp': False},
                       constraints=(weights_sum_to_1),
                       bounds=bounds
                       )
    return results.x
TypeError: neg_sharpe_ratio() missing 2 required positional arguments: 'er' and 'cov'

The function neg_sharpe_ratio is able to reference any of the variables passed into and created by the function msr without needing those same variables passed into it itself. Therefore you should be able to remove the parameters riskfree_rate, er, and cov from the neg_sharpe_ratio function definition and have it work, since those variables are already passed into its parent function, leaving you with:
def neg_sharpe_ratio(weights):
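For illustration, here is a minimal sketch of that closure-based version (assuming numpy is imported as np and erk is the course's risk-kit module providing portfolio_return and portfolio_vol):

from scipy.optimize import minimize
import numpy as np

def msr(riskfree_rate, er, cov):
    n = er.shape[0]
    init_guess = np.repeat(1/n, n)
    bounds = ((0.0, 1.0),) * n
    weights_sum_to_1 = {
        'type': 'eq',
        'fun': lambda weights: np.sum(weights) - 1
    }
    # The nested function closes over riskfree_rate, er and cov,
    # so minimize only has to supply the candidate weights.
    def neg_sharpe_ratio(weights):
        r = erk.portfolio_return(weights, er)    # erk: the course's risk-kit module, assumed imported elsewhere
        vol = erk.portfolio_vol(weights, cov)
        return -(r - riskfree_rate) / vol
    results = minimize(neg_sharpe_ratio, init_guess,
                       method="SLSQP",
                       options={'disp': False},
                       constraints=(weights_sum_to_1,),
                       bounds=bounds)
    return results.x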

For those who might be interested, I found my mistake.
Indeed, I had not passed the arguments of neg_sharpe_ratio correctly in the call to minimize: minimize supplies the weights itself as the first argument, so args only needs to provide riskfree_rate, er and cov.
Here is the amended code:
from scipy.optimize import minimize

def msr(riskfree_rate, er, cov):
    n = er.shape[0]
    init_guess = np.repeat(1/n, n)
    bounds = ((0.00, 1.0),) * n
    weights_sum_to_1 = {
        'type': 'eq',
        'fun': lambda weights: np.sum(weights) - 1
    }
    def neg_sharpe_ratio(weights, riskfree_rate, er, cov):
        r = erk.portfolio_return(weights, er)
        vol = erk.portfolio_vol(weights, cov)
        return -(r - riskfree_rate) / vol
    results = minimize(neg_sharpe_ratio, init_guess,
                       args=(riskfree_rate, er, cov), method="SLSQP",
                       options={'disp': False},
                       constraints=(weights_sum_to_1),
                       bounds=bounds
                       )
    return results.x


In Elixir, how can I extract a lambda to a named function when the lambda is in a closure?

I have the following closure:
def get!(Item, id) do
  Enum.find(
    @items,
    fn(item) -> item.id == id end
  )
end
As I believe this looks ugly and difficult to read, I'd like to give this a name, like:
def get!(Item, id) do
  defp has_target_id?(item), do: item.id = id
  Enum.find(@items, has_target_id?/1)
end
Unfortunately, this results in:
== Compilation error in file lib/auction/fake_repo.ex ==
** (ArgumentError) cannot invoke defp/2 inside function/macro
(elixir) lib/kernel.ex:5238: Kernel.assert_no_function_scope/3
(elixir) lib/kernel.ex:4155: Kernel.define/4
(elixir) expanding macro: Kernel.defp/2
lib/auction/fake_repo.ex:28: Auction.FakeRepo.get!/2
Assuming it is possible, what is the correct way to do this?
The code you posted has an enormous number of syntax errors/glitches. I would suggest you start by getting accustomed to the syntax, rather than trying to make Elixir better by inventing things that nobody uses.
Here is a correct version that does what you wanted. The task can be accomplished with an anonymous function, although I hardly see a reason to make perfectly idiomatic Elixir look ugly.
defmodule Foo do
  @items [%{id: 1}, %{id: 2}, %{id: 3}]

  def get!(id) do
    has_target_id? = fn item -> item.id == id end
    Enum.find(@items, has_target_id?)
  end
end

Foo.get! 1
#⇒ %{id: 1}
Foo.get! 4
#⇒ nil
You can do this:
def get!(Item, id) do
  Enum.find(
    @items,
    &compare_ids(&1, id)
  )
end

defp compare_ids(%Item{} = item, id) do
  item.id == id
end
But, that's equivalent to:
Enum.find(
  @items,
  fn item -> compare_ids(item, id) end
)
and may not pass your "looks ugly and difficult to read" test.
I was somehow under the impression Elixir supports nested functions?
Easy enough to test:
defmodule A do
  def go do
    def greet do
      IO.puts "hello"
    end
    greet()
  end
end
Same error:
$ iex a.ex
Erlang/OTP 20 [erts-9.2] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]
** (ArgumentError) cannot invoke def/2 inside function/macro
(elixir) lib/kernel.ex:5150: Kernel.assert_no_function_scope/3
(elixir) lib/kernel.ex:3906: Kernel.define/4
(elixir) expanding macro: Kernel.def/2
a.ex:3: A.go/0
Wouldn't:
defp compare_ids(item, id), do: item.id == id
be enough? Is there any advantage to including %Item{} or to making
separate functions for the true and false conditions?
What you gain by specifying the first parameter as:
func(%Item{} = item, target_id)
is that only an Item struct will match the first parameter. Here is an example:
defmodule Item do
  defstruct [:id, :name, :description]
end

defmodule Dog do
  defstruct [:id, :name, :owner]
end

defmodule A do
  def go(%Item{} = item), do: IO.inspect(item.id, label: "id: ")
end
In iex:
iex(1)> item = %Item{id: 1, name: "book", description: "old"}
%Item{description: "old", id: 1, name: "book"}
iex(2)> dog = %Dog{id: 1, name: "fido", owner: "joe"}
%Dog{id: 1, name: "fido", owner: "joe"}
iex(3)> A.go item
id: : 1
1
iex(4)> A.go dog
** (FunctionClauseError) no function clause matching in A.go/1
The following arguments were given to A.go/1:
# 1
%Dog{id: 1, name: "fido", owner: "joe"}
a.ex:10: A.go/1
iex(4)>
You get a function clause error if you call the function with a non-Item, and the earlier an error occurs, the better, because it makes debugging easier.
Of course, by preventing the function from accepting other structs, you make the function less general--but because it's a private function, you can't call it from outside the module anyway. On the other hand, if you wanted to call the function on both Dog and Item structs, then you could simply specify the first parameter as:
func(%{} = thing, target_id)
then both an Item and a Dog would match--but not non-maps.
What you gain by specifying the first parameter as:
func(%Item{id: id}, target_id)
is that you let erlang's pattern matching engine extract the data you need, rather than calling item.id as you would need to do with this definition:
func(%Item{}=item, target_id)
In erlang, pattern matching in a parameter list is the most efficient/convenient/stylish way to write functions. You use pattern matching to extract the data that you want to use in the function body.
Going even further, if you write the function definition like this:
func(%Item{id: target_id}, target_id)
(note that the same variable name, target_id, appears in both positions)
then erlang's pattern matching engine not only extracts the value for the id field from the Item struct, but also checks that the value is equal to the value of the target_id variable in the 2nd argument.
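For illustration, hypothetical clauses written with those patterns could look like this (the function names are only examples):

# Extract the id via pattern matching, then compare it in the body:
defp compare_ids(%Item{id: id}, target_id), do: id == target_id

# Let the pattern itself do the comparison: the first clause only matches
# when the struct's id equals target_id.
defp same_id?(%Item{id: target_id}, target_id), do: true
defp same_id?(%Item{}, _target_id), do: false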
Defining multiple function clauses is a common idiom in erlang, and it is considered good style because it takes advantage of pattern matching rather than logic inside the function body. Here's an erlang example:
get_evens(List) ->
    get_evens(List, []).

get_evens([Head|Tail], Results) when Head rem 2 == 0 ->
    get_evens(Tail, [Head|Results]);
get_evens([Head|Tail], Results) when Head rem 2 =/= 0 ->
    get_evens(Tail, Results);
get_evens([], Results) ->
    lists:reverse(Results).

Why does Julia Documenter require qualifying functions in doc-tests?

My doc-tests in Julia require a qualification with the module name, despite calling using my_module everywhere. If I do not qualify the functions, I get
ERROR: UndefVarError: add not defined
Here is the setup that gives this error. The directory structure with tree is:
.
|____docs
| |____make.jl
| |____src
| | |____index.md
|____src
| |____my_module.jl
The file docs/make.jl is:
using Documenter, my_module
makedocs(
modules = [my_module],
format = :html,
sitename = "my_module.jl",
doctest = true
)
The file docs/src/index.md is:
# Documentation

```@meta
CurrentModule = my_module
DocTestSetup = quote
    using my_module
end
```

```@autodocs
Modules = [my_module]
```
The file src/my_module.jl is:
module my_module

"""
    add(x, y)

Dummy function

# Examples
```jldoctest
julia> add(1, 2)
3
```
"""
function add(x::Number, y::Number)
    return x + y
end

end
If I qualify the doc-test in the src/my_module.jl with my_module.add(1,2), then it works fine.
How can I avoid qualifying function names in doc-tests?
Use a named @setup block
This is untested, but something like this should work:
module my_module

"""
    add(x, y)

Dummy function

# Examples
```@setup abc
import my_module: add
```

```jldoctest abc
julia> add(1, 2)
3
```
"""
function add(x::Number, y::Number)
    return x + y
end

end
Following the comments in this thread, the problem is that the add function is not exported, so it is not brought into scope with using. You can add this line near the top of src/my_module.jl, after the module declaration:
export add
And then the doc-testing works.
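For reference, a sketch of src/my_module.jl with the export added:

module my_module

export add

"""
    add(x, y)

Dummy function

# Examples
```jldoctest
julia> add(1, 2)
3
```
"""
function add(x::Number, y::Number)
    return x + y
end

end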

Problems with GUI, unable to use handles to store variables

I am creating a GUI where a user inputs a value and, when they press a pushbutton, it runs an external function and displays error messages. I am having trouble inserting the variable successfully in the GUI code: I am confused as to where to insert my variable. I have tried handles, but unfortunately it's not working.
% --- Executes just before Stallfunction is made visible.
function Stallfunction_OpeningFcn(hObject, ~, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to Stallfunction (see VARARGIN)
% Choose default command line output for Stallfunction
handles.user_entry = user_entry;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes Stallfunction wait for user response (see UIRESUME)
% uiwait(handles.figure1);
I have inserted the variable, 'user_entry', in the code above. Is that correct?
user_entry is not assigned a value in your function. If you launch your GUI by passing a value for user_entry like this:
Stallfunction(user_entry)
then the first lines of your code in the openingFcn should be:
if ~isempty(varargin)
    user_entry = varargin{1};
else
    error('please start the GUI with an input value')
end
After this, you can assign user_entry to the handles structure as you're doing already.
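Putting the two pieces together, the opening function might look roughly like this (GUIDE's boilerplate comments omitted):

function Stallfunction_OpeningFcn(hObject, ~, handles, varargin)
% Take the value passed on the command line, e.g. Stallfunction(user_entry)
if ~isempty(varargin)
    user_entry = varargin{1};
else
    error('please start the GUI with an input value')
end
handles.user_entry = user_entry;   % store it for use in other callbacks
guidata(hObject, handles);         % update the handles structure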
Try this:
function num = get_num()
    fig = figure('Units', 'characters', ...
                 'Position', [70 20 30 5], ...
                 'CloseRequestFcn', @close_Callback);
    edit_num = uicontrol( ...
        'Parent', fig, ...
        'Style', 'edit', ...
        'Units', 'characters', ...
        'Position', [1 1 10 3], ...
        'HorizontalAlignment', 'left', ...
        'String', 'init', ...
        'Callback', @edit_num_Callback);
    button_finish = uicontrol( ...
        'Parent', fig, ...
        'Tag', 'button_finish', ...
        'Style', 'pushbutton', ...
        'Units', 'characters', ...
        'Position', [15 1 10 3], ...
        'String', 'Finish', ...
        'Callback', @button_finish_Callback);

    % Nested functions
    function edit_num_Callback(hObject, eventdata)
        disp('this is a callback for edit box');
    end
    function button_finish_Callback(hObject, eventdata)
        % Exit
        close(fig);
    end
    function close_Callback(hObject, eventdata)
        num_prelim = str2num(get(edit_num, 'string'));
        if (isempty(num_prelim))
            errordlg('Must be a number.', 'Error', 'modal');
            return;
        end
        num = num_prelim;
        delete(fig);
    end

    waitfor(fig);
end
See if you can mess with this and get what you want. Also, learn to use nested functions and how callbacks work in MATLAB. Save this as a function file and then call "num = get_num();"

How to profile methods in Scala?

What is a standard way of profiling Scala method calls?
What I need are hooks around a method that I can use to start and stop timers.
In Java I use aspect programming, aspectJ, to define the methods to be profiled and inject bytecode to achieve the same.
Is there a more natural way in Scala, where I can define a bunch of functions to be called before and after a function without losing any static typing in the process?
Do you want to do this without changing the code that you want to measure timings for? If you don't mind changing the code, then you could do something like this:
def time[R](block: => R): R = {
  val t0 = System.nanoTime()
  val result = block    // call-by-name
  val t1 = System.nanoTime()
  println("Elapsed time: " + (t1 - t0) + "ns")
  result
}

// Now wrap your method calls, for example change this...
val result = 1 to 1000 sum

// ... into this
val result = time { 1 to 1000 sum }
In addition to Jesper's answer, you can automatically wrap method invocations in the REPL:
scala> def time[R](block: => R): R = {
| val t0 = System.nanoTime()
| val result = block
| println("Elapsed time: " + (System.nanoTime - t0) + "ns")
| result
| }
time: [R](block: => R)R
Now - let's wrap anything in this
scala> :wrap time
wrap: no such command. Type :help for help.
OK - we need to be in power mode
scala> :power
** Power User mode enabled - BEEP BOOP SPIZ **
** :phase has been set to 'typer'. **
** scala.tools.nsc._ has been imported **
** global._ and definitions._ also imported **
** Try :help, vals.<tab>, power.<tab> **
Wrap away
scala> :wrap time
Set wrapper to 'time'
scala> BigDecimal("1.456")
Elapsed time: 950874ns
Elapsed time: 870589ns
Elapsed time: 902654ns
Elapsed time: 898372ns
Elapsed time: 1690250ns
res0: scala.math.BigDecimal = 1.456
I have no idea why that printed stuff out 5 times
Update as of 2.12.2:
scala> :pa
// Entering paste mode (ctrl-D to finish)
package wrappers { object wrap { def apply[A](a: => A): A = { println("running...") ; a } }}
// Exiting paste mode, now interpreting.
scala> $intp.setExecutionWrapper("wrappers.wrap")
scala> 42
running...
res2: Int = 42
This is what I use:
import System.nanoTime

def profile[R](code: => R, t: Long = nanoTime) = (code, nanoTime - t)

// usage:
val (result, time) = profile {
  /* block of code to be profiled */
}

val (result2, time2) = profile { methodToBeProfiled(foo) }
There are three benchmarking libraries for Scala that you can make use of.
Since the URLs on the linked site are likely to change, I am pasting the relevant content below.
SPerformance - Performance Testing framework aimed at automagically comparing performance tests and working inside Simple Build Tool.
scala-benchmarking-template - SBT template project for creating Scala (micro-)benchmarks based on Caliper.
Metrics - Capturing JVM- and application-level metrics. So you know what's going on
testing.Benchmark might be useful.
scala> def testMethod {Thread.sleep(100)}
testMethod: Unit
scala> object Test extends testing.Benchmark {
| def run = testMethod
| }
defined module Test
scala> Test.main(Array("5"))
$line16.$read$$iw$$iw$Test$ 100 100 100 100 100
I use a technique that's easy to move around in code blocks. The crux is that the same exact line starts and ends the timer - so it is really a simple copy and paste. The other nice thing is that you get to define what the timing means to you as a string, all in that same line.
Example usage:

Timelog.timer("timer name/description")
// code to time
Timelog.timer("timer name/description")
The code:

object Timelog {

  val timers = scala.collection.mutable.Map.empty[String, Long]

  // Usage: call once to start the timer, and once to stop it, using the same timer name parameter
  def timer(timerName: String) = {
    if (timers contains timerName) {
      val output = s"$timerName took ${(System.nanoTime() - timers(timerName)) / 1000 / 1000} milliseconds"
      println(output)  // or log, or send off to some performance db for analytics
    }
    else timers(timerName) = System.nanoTime()
  }
}
Pros:
no need to wrap code as a block or manipulate within lines
can easily move the start and end of the timer among code lines when being exploratory
Cons:
less shiny for utterly functional code
obviously this object leaks map entries if you do not "close" timers,
e.g. if your code doesn't get to the second invocation for a given timer start.
ScalaMeter is a nice library to perform benchmarking in Scala
Below is a simple example
import org.scalameter._
def sumSegment(i: Long, j: Long): Long = (i to j) sum
val (a, b) = (1, 1000000000)
val execution_time = measure { sumSegment(a, b) }
If you execute above code snippet in Scala Worksheet you get the running time in milliseconds
execution_time: org.scalameter.Quantity[Double] = 0.260325 ms
The recommended approach to benchmarking Scala code is via sbt-jmh
"Trust no one, bench everything." - sbt plugin for JMH (Java
Microbenchmark Harness)
This approach is taken by many of the major Scala projects, for example,
Scala programming language itself
Dotty (Scala 3)
cats library for functional programming
Metals language server for IDEs
A simple wrapper timer based on System.nanoTime is not a reliable method of benchmarking:
System.nanoTime is as bad as String.intern now: you can use it,
but use it wisely. The latency, granularity, and scalability effects
introduced by timers may and will affect your measurements if done
without proper rigor. This is one of the many reasons why
System.nanoTime should be abstracted from the users by benchmarking
frameworks
Furthermore, considerations such as JIT warmup, garbage collection, system-wide events, etc. might introduce unpredictability into measurements:
Tons of effects need to be mitigated, including warmup, dead code
elimination, forking, etc. Luckily, JMH already takes care of many
things, and has bindings for both Java and Scala.
Based on Travis Brown's answer, here is an example of how to set up a JMH benchmark for Scala:
Add jmh to project/plugins.sbt
addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.3.7")
Enable jmh plugin in build.sbt
enablePlugins(JmhPlugin)
Add to src/main/scala/bench/VectorAppendVsListPreppendAndReverse.scala
package bench

import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.AverageTime))
class VectorAppendVsListPreppendAndReverse {
  val size = 1_000_000
  val input = 1 to size

  @Benchmark def vectorAppend: Vector[Int] =
    input.foldLeft(Vector.empty[Int])({ case (acc, next) => acc.appended(next) })

  @Benchmark def listPrependAndReverse: List[Int] =
    input.foldLeft(List.empty[Int])({ case (acc, next) => acc.prepended(next) }).reverse
}
Execute benchmark with
sbt "jmh:run -i 10 -wi 10 -f 2 -t 1 bench.VectorAppendVsListPreppendAndReverse"
The results are
Benchmark                                                    Mode  Cnt  Score   Error  Units
VectorAppendVsListPreppendAndReverse.listPrependAndReverse   avgt   20  0.024 ± 0.001   s/op
VectorAppendVsListPreppendAndReverse.vectorAppend            avgt   20  0.130 ± 0.003   s/op
which seems to indicate that prepending to a List and then reversing it at the end is roughly five times faster than repeatedly appending to a Vector.
I took the solution from Jesper and added some aggregation to it over multiple runs of the same code:
def time[R](block: => R) = {
  def print_result(s: String, ns: Long) = {
    val formatter = java.text.NumberFormat.getIntegerInstance
    println("%-16s".format(s) + formatter.format(ns) + " ns")
  }

  var t0 = System.nanoTime()
  var result = block    // call-by-name
  var t1 = System.nanoTime()
  print_result("First Run", (t1 - t0))

  var lst = for (i <- 1 to 10) yield {
    t0 = System.nanoTime()
    result = block    // call-by-name
    t1 = System.nanoTime()
    print_result("Run #" + i, (t1 - t0))
    (t1 - t0).toLong
  }

  print_result("Max", lst.max)
  print_result("Min", lst.min)
  print_result("Avg", (lst.sum / lst.length))
}
Suppose you want to time two functions, counter_new and counter_old; the usage is as follows:
scala> time {counter_new(lst)}
First Run 2,963,261,456 ns
Run #1 1,486,928,576 ns
Run #2 1,321,499,030 ns
Run #3 1,461,277,950 ns
Run #4 1,299,298,316 ns
Run #5 1,459,163,587 ns
Run #6 1,318,305,378 ns
Run #7 1,473,063,405 ns
Run #8 1,482,330,042 ns
Run #9 1,318,320,459 ns
Run #10 1,453,722,468 ns
Max 1,486,928,576 ns
Min 1,299,298,316 ns
Avg 1,407,390,921 ns
scala> time {counter_old(lst)}
First Run 444,795,051 ns
Run #1 1,455,528,106 ns
Run #2 586,305,699 ns
Run #3 2,085,802,554 ns
Run #4 579,028,408 ns
Run #5 582,701,806 ns
Run #6 403,933,518 ns
Run #7 562,429,973 ns
Run #8 572,927,876 ns
Run #9 570,280,691 ns
Run #10 580,869,246 ns
Max 2,085,802,554 ns
Min 403,933,518 ns
Avg 797,980,787 ns
Hopefully this is helpful
I like the simplicity of @wrick's answer, but also wanted:
the profiler handles looping (for consistency and convenience)
more accurate timing (using nanoTime)
time per iteration (not total time of all iterations)
just return ns/iteration - not a tuple
This is achieved here:
def profile[R](repeat: Int)(code: => R, t: Long = System.nanoTime) = {
  (1 to repeat).foreach(i => code)
  (System.nanoTime - t) / repeat
}
For even more accuracy, a simple modification allows a JVM Hotspot warmup loop (not timed) for timing small snippets:
def profile[R](repeat: Int)(code: => R) = {
  (1 to 10000).foreach(i => code)    // warmup
  val start = System.nanoTime
  (1 to repeat).foreach(i => code)
  (System.nanoTime - start) / repeat
}
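Usage is then, for example (someCodeToMeasure is just a placeholder for whatever you want to time):

val nsPerIteration = profile(1000) { someCodeToMeasure() }   // average nanoseconds per iteration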
You can use System.currentTimeMillis:
def time[R](block: => R): R = {
  val t0 = System.currentTimeMillis()
  val result = block    // call-by-name
  val t1 = System.currentTimeMillis()
  println("Elapsed time: " + (t1 - t0) + "ms")
  result
}
Usage:

time {
  // execute something here, like methods or some code
}
nanoTime reports nanoseconds, which are hard to read at a glance, so I suggest using currentTimeMillis instead.
While standing on the shoulders of giants...
A solid 3rd-party library would be more ideal, but if you need something quick and std-library based, the following variant provides:
Repetitions
Last result wins for multiple repetitions
Total time and average time for multiple repetitions
Removes the need for time/instant provider as a param
import scala.concurrent.duration._
import scala.language.{postfixOps, implicitConversions}

package object profile {

  def profile[R](code: => R): R = profileR(1)(code)

  def profileR[R](repeat: Int)(code: => R): R = {
    require(repeat > 0, "Profile: at least 1 repetition required")

    val start = Deadline.now
    val result = (1 until repeat).foldLeft(code) { (_: R, _: Int) => code }
    val end = Deadline.now

    val elapsed = (end - start) / repeat

    if (repeat > 1) {
      println(s"Elapsed time: $elapsed averaged over $repeat repetitions")
      val totalElapsed = end - start
      println(s"Total elapsed time: $totalElapsed")
    }
    else println(s"Elapsed time: $elapsed")

    result
  }
}
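With the package object in place, usage might look like this (expensiveCall is a placeholder):

import profile._

val result = profileR(5) { expensiveCall() }   // prints the average and total elapsed time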
It is also worth noting that you can use the Duration.toCoarsest method to convert to the biggest time unit possible, although I am not sure how well this copes with minor time differences between runs, e.g.
Welcome to Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import scala.concurrent.duration._
import scala.concurrent.duration._
scala> import scala.language.{postfixOps, implicitConversions}
import scala.language.{postfixOps, implicitConversions}
scala> 1000.millis
res0: scala.concurrent.duration.FiniteDuration = 1000 milliseconds
scala> 1000.millis.toCoarsest
res1: scala.concurrent.duration.Duration = 1 second
scala> 1001.millis.toCoarsest
res2: scala.concurrent.duration.Duration = 1001 milliseconds
scala>
Adding to the above, here is a variant that also takes the method name and reports the elapsed time in milliseconds:

import java.util.concurrent.TimeUnit

def profile[R](block: => R, methodName: String): R = {
  val n = System.nanoTime()
  val result = block
  val n1 = System.nanoTime()
  println(s"Elapsed time: ${TimeUnit.MILLISECONDS.convert(n1 - n, TimeUnit.NANOSECONDS)}ms - MethodName: ${methodName}")
  result
}
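which could then be called along these lines (computeSomething is a placeholder):

val result = profile(computeSomething(), "computeSomething")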

Problem with spline method = 'monoH.FC'

I am interested in using the monotone spline, but I get an error when R tries to use it. I am using R 2.12.0, and the documentation says the method 'monoH.FC' has been supported since R 2.8.0.
Reproducible example (same result for more complicated (x,y) relationships)
x<-1:2
y<-1:2
spline(x,y,method="monoH.FC")
Error in spline(x, y, method = "monoH.FC") : invalid interpolation method
What I have tried
?spline returns:
...
Usage:
...
spline(x, y = NULL, n = 3*length(x), method = "fmm",
xmin = min(x), xmax = max(x), xout, ties = mean)
...
Arguments:
method: specifies the type of spline to be used. Possible values are
‘"fmm"’, ‘"natural"’, ‘"periodic"’ and ‘"monoH.FC"’.
...
But the spline function itself indicates that the 'monoH.FC' method is not supported:
...
method <- pmatch(method, c("periodic", "natural", "fmm"))
if (is.na(method))
stop("invalid interpolation method")
...
Question
How can I use method = 'monoH.FC' with spline?
Use splinefun; it supports method = "monoH.FC".
The last example in ?spline shows you how to do it.
## An example of monotone interpolation
n <- 20
set.seed(11)
x. <- sort(runif(n)) ; y. <- cumsum(abs(rnorm(n)))
plot(x.,y.)
curve(splinefun(x.,y.)(x), add=TRUE, col=2, n=1001)
curve(splinefun(x.,y., method="mono")(x), add=TRUE, col=3, n=1001)
legend("topleft", paste("splinefun( \"", c("fmm", "monoH.CS"), "\" )", sep=''),
col=2:3, lty=1)
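If you specifically want spline()-style output (interpolated x/y values on a grid) rather than an interpolating function, you can build it from splinefun yourself; a minimal sketch:

x <- 1:5
y <- c(1, 2, 4, 4.5, 6)                # toy data, monotone increasing
f <- splinefun(x, y, method = "monoH.FC")
xout <- seq(min(x), max(x), length.out = 101)
fit <- list(x = xout, y = f(xout))     # same shape as spline()'s return value
plot(x, y); lines(fit)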