Google OR Tools, "returned a result with an exception set" Error on my server - exception

I am using OR-Tools for path optimization. When I run my code on localhost it returns the expected result. However, when I run it on my DigitalOcean App it gives the error message "returned a result with an exception set". Can anyone help me find a solution?
Here is the traceback from Postman, in case it is useful:
The above exception ('numpy.float64' object cannot be interpreted as an integer) was the direct cause of the following exception:
/workspace/.heroku/python/lib/python3.10/site-packages/ortools/constraint_solver/pywrapcp.py, line 5730, in GetArcCostForVehicle
return _pywrapcp.RoutingModel_CostVar(self)
def GetArcCostForVehicle(self, from_index: "int64_t", to_index: "int64_t", vehicle: "int64_t") -> "int64_t":
    r"""
    Returns the cost of the transit arc between two nodes for a given vehicle.
    Input are variable indices of node. This returns 0 if vehicle < 0.
    """
    return _pywrapcp.RoutingModel_GetArcCostForVehicle(self, from_index, to_index, vehicle) …

Please check that all your transit callbacks are returning an int type.
In your code you must have registered a transit callback:
# Create and register a transit callback.
def distance_callback(from_index, to_index):
    """Returns the distance between the two nodes."""
    # Convert from routing variable Index to distance matrix NodeIndex.
    from_node = manager.IndexToNode(from_index)
    to_node = manager.IndexToNode(to_index)
    return data['distance_matrix'][from_node][to_node]

transit_callback_index = routing.RegisterTransitCallback(distance_callback)
Then you pass this transit_callback_index to set up the arc cost evaluator:
# Define cost of each arc.
routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
The point is that the function distance_callback() will be called internally by the C++ library (Python is just a wrapper on top of the C++ library). The C++ side expects a C++ int64_t to be returned by the distance_callback() wrapper function (i.e. the Python-C++ bridge needs an int from the Python method so it can convert it to a C++ int64_t).
If your method returns a numpy.float64 (e.g. because your distance matrix is a NumPy array full of floating-point values), then everything blows up, hence the error message...
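A minimal sketch of the fix, assuming (as the traceback suggests) that your distance matrix is a NumPy array of floats: either round the whole matrix to plain Python ints up front, or cast inside the callback. Here data, manager and routing are the objects from the snippets above.

import numpy as np

# Option 1: convert the matrix to plain Python ints once, before building the model.
data['distance_matrix'] = np.rint(data['distance_matrix']).astype(int).tolist()

# Option 2: cast inside the callback itself.
def distance_callback(from_index, to_index):
    """Returns the distance between the two nodes as a plain int."""
    from_node = manager.IndexToNode(from_index)
    to_node = manager.IndexToNode(to_index)
    return int(data['distance_matrix'][from_node][to_node])

Either way, the value that crosses the Python-C++ bridge is a plain int, which the wrapper can convert to int64_t.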

Related

Can you help me make sense of this class constructor? (Adafruit_ATParser)

I am building a device for my research team. To briefly describe it, this device uses a motor and load sensor connected to an Arduino to apply a rotational force to a corn stalk and record the resistance of the stalk. We are in the process of building Bluetooth into the device. We are using this BT module.
We have a BLE GATT service with 2 characteristics for storing DATA and 1 for holding the command, which is an integer that will be read by the device and acted on. Reading the command characteristic is where we encounter our problem.
void get_input(){
  uint16_t bufSize = 15;
  char inputBuffer[bufSize];
  bleParse = Adafruit_ATParser(); // Throws error: bleParse was not declared in this scope
  bleParse.atcommandStrReply("AT+GATTCHAR=3", &inputBuffer, bufSize, 1000);
  Serial.print("input:");
  Serial.println(inputBuffer);
}
The functions I am trying to use are found in the library for the module, in Adafruit_ATParser.cpp:
/******************************************************************************/
/*!
    @brief Constructor
*/
/******************************************************************************/
Adafruit_ATParser::Adafruit_ATParser(void)
{
  _mode    = BLUEFRUIT_MODE_COMMAND;
  _verbose = false;
}

/******************************************************************************/
/*!
    @brief  Send an AT command and get multiline string response into
            user-provided buffer.
    @param[in] cmd      Command
    @param[in] buf      Provided buffer
    @param[in] bufsize  buffer size
    @param[in] timeout  timeout in milliseconds
*/
/******************************************************************************/
uint16_t Adafruit_ATParser::atcommandStrReply(const char cmd[], char* buf, uint16_t bufsize, uint16_t timeout)
{
  uint16_t result_bytes;
  uint8_t current_mode = _mode;

  // switch mode if necessary to execute command
  if ( current_mode == BLUEFRUIT_MODE_DATA ) setMode(BLUEFRUIT_MODE_COMMAND);

  // Execute command with parameter and get response
  println(cmd);
  result_bytes = this->readline(buf, bufsize, timeout, true);

  // switch back if necessary
  if ( current_mode == BLUEFRUIT_MODE_DATA ) setMode(BLUEFRUIT_MODE_DATA);

  return result_bytes;
}
None of the examples in the library use this. They all create their own parsers. For example, the neopixel_picker example sketch has a file called packetParser.cpp which I believe retrieves data from the BT module for that specific sketch, but it never includes or uses Adafruit_ATParser. There are no examples of this constructor anywhere and I cannot figure out how to use it. I have tried these ways:
bleParse = Adafruit_ATParser();
Adafruit_ATParser bleParse();
Adafruit_ATParser();
ble.Adafruit_ATParser bleParse();
Note: ble is an object that represents the serial connection between the Arduino and the BT module, created with:
SoftwareSerial bluefruitSS = SoftwareSerial(BLUEFRUIT_SWUART_TXD_PIN, BLUEFRUIT_SWUART_RXD_PIN);
Adafruit_BluefruitLE_UART ble(bluefruitSS, BLUEFRUIT_UART_MODE_PIN,BLUEFRUIT_UART_CTS_PIN, BLUEFRUIT_UART_RTS_PIN);
Can anyone give me a clue on how to use the Adafruit_ATParser() constructor? Also, if the constructor has no reference to the ble object, how does it pass AT commands to the BT module?
I know this is a big ask, I appreciate any input you can give me.
Like this:
Adafruit_ATParser bleParse;
You were closest with Adafruit_ATParser bleParse();. This is a common beginner mistake because it looks right. Unfortunately, it declares a function bleParse which takes no arguments and returns an Adafruit_ATParser object.
I can't answer the second question.
EDIT
I've taken the time to have a look at the code. This is what I found
class Adafruit_BluefruitLE_UART : public Adafruit_BLE
{
and
class Adafruit_BLE : public Adafruit_ATParser
{
What this means is that the Adafruit_BluefruitLE_UART class is derived from the Adafruit_BLE class, which in turn is derived from the Adafruit_ATParser class. Derivation means that any public method of Adafruit_BLE (or of Adafruit_ATParser) can also be used on an Adafruit_BluefruitLE_UART object. You already have an Adafruit_BluefruitLE_UART object (you called it ble), so you can just call the method you want on that object.
SoftwareSerial bluefruitSS = SoftwareSerial(BLUEFRUIT_SWUART_TXD_PIN, BLUEFRUIT_SWUART_RXD_PIN);
Adafruit_BluefruitLE_UART ble(bluefruitSS, BLUEFRUIT_UART_MODE_PIN,BLUEFRUIT_UART_CTS_PIN, BLUEFRUIT_UART_RTS_PIN);
ble.atcommandStrReply( ... );

Python objects in __dealloc__ in Cython

In the docs it is written that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have the following extension class:
cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends some data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete an instance of SomeClass()?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or there is no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.
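For contrast, here is a minimal sketch (hypothetical class and field names) of the case the documentation is actually talking about, where __dealloc__ is needed because C memory was explicitly allocated. In your SomeClass, u_data is just a cast of self.data rather than a separate allocation, so there is nothing to free and you can omit __dealloc__ entirely.

from libc.stdlib cimport malloc, free

cdef class Buffer:
    cdef double *values   # explicitly allocated C data: we must free it ourselves
    cdef dict meta        # Python attribute: cleared by Cython after __dealloc__ returns

    def __cinit__(self, Py_ssize_t n):
        self.values = <double *>malloc(n * sizeof(double))
        if self.values == NULL:
            raise MemoryError()
        self.meta = {}

    def __dealloc__(self):
        free(self.values)  # free only the C allocation; never touch self.meta here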

How can I use SWIG to handle a JAVA to C++ call with a pointer-to-pointer argout argument?

The problem involved a Java call to a C function (API) which returned a pointer-to-pointer as an argout argument. I was trying to call the C API from Java and I had no way to modify the API.
Using SWIG typemap to pass pointer-to-pointer:
Here is another approach using typemaps. It is targeting Perl, not Java, but the concepts are the same. I finally managed to get it working using typemaps and no helper functions:
For this function:
typedef void * MyType;
int getblock( int a, int b, MyType *block );
I have 2 typemaps:
%typemap(perl5, in, numinputs=0) void ** data( void * scrap )
{
  $1 = &scrap;
}

%typemap(perl5, argout) void ** data
{
  SV* tempsv = sv_newmortal();
  if ( argvi >= items ) EXTEND(sp,1);
  SWIG_MakePtr( tempsv, (void *)*$1, $descriptor(void *), 0);
  $result = tempsv;
  argvi++;
}
And the function is defined as:
int getblock( int a, int b, void ** data );
In my swig .i file. Now, this passes back an opaque pointer in the argout typemap, because that's what's useful for this particular situation; however, you could replace the SWIG_MakePtr line with code that actually does something with the data behind the pointer if you wanted to. Also, when I want to pass the pointer into a function, I have a typemap that looks like this:
%typemap(perl5, in) void * data
{
  if ( !SvROK($input) ) croak( "Not a reference...\n" );
  if ( SWIG_ConvertPtr($input, (void **) &$1, $1_descriptor, 0 ) == -1 )
    croak( "Couldn't convert $1 to $1_descriptor\n");
}
And the function is defined as:
int useblock( void * data );
In my swig .i file.
Obviously, this is all Perl, but it should map pretty directly to Java as far as the typemap architecture goes. Hope it helps...
[Swig] Java: Using C helper function to pass pointer-to-pointer
The API.h header file contained:
extern int ReadMessage(HEADER **hdr);
The original C-call looked like:
HEADER *hdr;
int status;
status = ReadMessage(&hdr);
The function of the API was to store data at the memory location specified by the pointer-to-pointer.
I tried to use SWIG to create the appropriate interface file. SWIG.i created the file SWIGTYPE_p_p_header.java from API.h. The problem is the SWIGTYPE_p_p_header constructor initialized swigCPtr to 0.
The JAVA call looked like:
SWIGTYPE_p_p_header hdr = new SWIGTYPE_p_p_header();
status = SWIG.ReadMessage(hdr);
But when I called the API from Java, the pointer was always 0.
I finally gave up on passing the pointer-to-pointer as an input argument. Instead I defined another C function in SWIG.i that returns the pointer as its return value. I thought it was a kludge... but it worked!
You may want to try this:
SWIG.i looks like:
// return pointer-to-pointer
%inline %{
HEADER *ReadMessageHelper() {
  HEADER *hdr;
  int returnValue;
  returnValue = ReadMessage(&hdr);
  if (returnValue != 1) hdr = NULL;
  return hdr;
}
%}
The inline function above could leak memory, as Java won't take ownership of the memory created by ReadMessageHelper, since the HEADER instance is created on the heap.
The fix for the memory leak is to declare ReadMessageHelper with %newobject so that Java takes ownership of the memory:
%newobject ReadMessageHelper();
The Java call would now look like:
HEADER hdr;
hdr = SWIG.ReadMessageHelper();
If you are lucky, as I was, you may have another API available to release the message buffer. In which case, you wouldn’t have to do the previous step.
William Fulton, the SWIG guru, had this to say about the approach above:
“I wouldn't see the helper function as a kludge, more the simplest solution to a tricky problem. Consider what the equivalent pure 100% Java code would be for ReadMessage(). I don't think there is an equivalent, as Java classes are passed by reference and there is no such thing as a reference to a reference, or pointer to a pointer, in Java. In the C function you have, a HEADER instance is created by ReadMessage and passed back to the caller. I don't see how one can do the equivalent in Java without providing some wrapper class around HEADER and passing the wrapper to the ReadMessage function. At the end of the day, ReadMessage returns a newly created HEADER, and the Java way of returning newly created objects is to return it in the return value, not via a parameter.”

Scala: val foo = (arg: Type) => {...} vs. def(arg:Type) = {...}

Related to this thread
I am still unclear on the distinction between these 2 definitions:
val foo = (arg: Type) => {...}
def(arg:Type) = {...}
As I understand it:
1) the val version is bound once, at compile time
a single Function1 instance is created
can be passed as a method parameter
2) the def version is bound anew on each call
new method instance created per call.
If the above is true, then why would one ever choose the def version in cases where the operation(s) to perform are not dependent on runtime state?
For example, in a servlet environment you might want to get the IP address of the connecting client; in this case you need to use a def since, of course, there is no connected client at compile time.
On the other hand you often know, at compile time, the operations to perform, and can go with immutable val foo = (i: Type) => {...}
As a rule of thumb then, should one only use defs when there is a runtime state dependency?
Thanks for clarifying
I'm not entirely clear on what you mean by runtime state dependency. Both vals and defs can close over their lexical scope and are hence unlimited in this way. So what are the differences between methods (defs) and functions (as vals) in Scala (which has been asked and answered before)?
You can parameterize a def
For example:
object List {
  def empty[A]: List[A] = Nil    // type parameter allowed here
  val Empty: List[Nothing] = Nil // cannot have a type parameter
}
I can then call:
List.empty[Int]
But I would have to use:
List.Empty: List[Int]
But of course there are other reasons as well. Such as:
A def is a method at the JVM level
If I were to use the piece of code:
trades filter isEuropean
I could choose a declaration of isEuropean as either:
val isEuropean = (_: Trade).country.region == Europe
Or
def isEuropean(t: Trade) = t.country.region == Europe
The latter avoids creating an object (for the function instance) at the point of declaration, but not at the point of use: Scala creates a function instance from the method declaration at the point of use. This would be clearer if I had used the _ syntax.
However, in the following piece of code:
val b = isEuropean(t)
...if isEuropean is declared as a def, no such object is created, and hence the code may be more performant (if used in very tight loops where every last nanosecond is of critical value).

What is Map/Reduce?

I hear a lot about map/reduce, especially in the context of Google's massively parallel compute system. What exactly is it?
From the abstract of Google's MapReduce research publication page:
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.
The advantage of MapReduce is that the processing can be performed in parallel on multiple processing nodes (multiple servers) so it is a system that can scale very well.
Since it's based on the functional programming model, the map and reduce steps have no side-effects (the state and results of each subsection of a map process do not depend on the others), so the data set being mapped and reduced can be split across multiple processing nodes.
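To make the model concrete, here is a minimal single-machine Python sketch of the canonical word-count example (purely illustrative: the names map_phase, reduce_phase and the tiny documents list are mine, and a real MapReduce system distributes these phases across many machines):

from collections import defaultdict

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word in one document.
    return [(word, 1) for word in document.split()]

def reduce_phase(word, counts):
    # Reduce: merge all intermediate values that share the same key.
    return (word, sum(counts))

documents = ["the cat sat", "the dog sat"]

# Shuffle: group the intermediate values by key before reducing.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_phase(doc):
        grouped[word].append(count)

print([reduce_phase(word, counts) for word, counts in grouped.items()])
# [('the', 2), ('cat', 1), ('sat', 2), ('dog', 1)]

Because each map call and each reduce call depends only on its own input, the per-document map work and the per-key reduce work could be handed to different nodes without changing the result.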
Joel's Can Your Programming Language Do This? piece discusses how understanding functional programming was essential in Google to come up with MapReduce, which powers its search engine. It's a very good read if you're unfamiliar with functional programming and how it allows scalable code.
See also: Wikipedia: MapReduce
Related question: Please explain mapreduce simply
Map is a function that applies another function to all the items on a list, to produce another list with all the return values on it. (Another way of saying "apply f to x" is "call f, passing it x". So sometimes it sounds nicer to say "apply" instead of "call".)
This is how map is probably written in C# (it's called Select and is in the standard library):
public static IEnumerable<R> Select<T, R>(this IEnumerable<T> list, Func<T, R> func)
{
foreach (T item in list)
yield return func(item);
}
As you're a Java dude, and Joel Spolsky likes to tell GROSSLY UNFAIR LIES about how crappy Java is (actually, he's not lying, it is crappy, but I'm trying to win you over), here's my very rough attempt at a Java version (I have no Java compiler, and I vaguely remember Java version 1.1!):
// represents a function that takes one arg and returns a result
public interface IFunctor
{
    Object invoke(Object arg);
}

public static Object[] map(Object[] list, IFunctor func)
{
    Object[] returnValues = new Object[list.length];
    for (int n = 0; n < list.length; n++)
        returnValues[n] = func.invoke(list[n]);
    return returnValues;
}
I'm sure this can be improved in a million ways. But it's the basic idea.
Reduce is a function that turns all the items on a list into a single value. To do this, it needs to be given another function func that turns two items into a single value. It would work by giving the first two items to func. Then the result of that along with the third item. Then the result of that with the fourth item, and so on until all the items have gone and we're left with one value.
In C# reduce is called Aggregate and is again in the standard library. I'll skip straight to a Java version:
// represents a function that takes two args and returns a result
public interface IBinaryFunctor
{
    Object invoke(Object arg1, Object arg2);
}

public static Object reduce(Object[] list, IBinaryFunctor func)
{
    if (list.length == 0)
        return null; // or throw something?
    if (list.length == 1)
        return list[0]; // just return the only item
    Object returnValue = func.invoke(list[0], list[1]);
    for (int n = 2; n < list.length; n++)
        returnValue = func.invoke(returnValue, list[n]);
    return returnValue;
}
These Java versions need generics added to them, but I don't know how to do that in Java. But you should be able to pass them anonymous inner classes to provide the functors:
String[] names = getLotsOfNames();
String commaSeparatedNames = (String)reduce(names,
    new IBinaryFunctor() {
        public Object invoke(Object arg1, Object arg2)
        { return ((String)arg1) + ", " + ((String)arg2); }
    });
Hopefully generics would get rid of the casts. The typesafe equivalent in C# is:
string commaSeparatedNames = names.Aggregate((a, b) => a + ", " + b);
Why is this "cool"? Simple ways of breaking up larger calculations into smaller pieces, so they can be put back together in different ways, are always cool. The way Google applies this idea is to parallelization, because both map and reduce can be shared out over several computers.
But the key requirement is NOT that your language can treat functions as values. Any OO language can do that. The actual requirement for parallelization is that the little func functions you pass to map and reduce must not use or update any state. They must return a value that is dependent only on the argument(s) passed to them. Otherwise, the results will be completely screwed up when you try to run the whole thing in parallel.
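A tiny Python sketch of why that requirement matters (the function name double and the use of a local process pool are just illustrative stand-ins for work being shared out over machines): because the function is pure, the pool can split the map work across workers in any order and still get the same answer.

from multiprocessing import Pool

def double(x):
    # Pure: the result depends only on the argument; no shared state is read or updated.
    return x * 2

if __name__ == "__main__":
    with Pool() as pool:
        # The pool shares the map work out over several worker processes.
        print(pool.map(double, range(10)))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

If double read or wrote some global state instead, the parallel result could differ from the sequential one, which is exactly the "completely screwed up" case described above.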
After getting frustrated with blog posts that were either very long and waffly or very short and vague, I eventually discovered this very good, rigorous, concise paper.
Then I went ahead and made it more concise by translating it into Scala, where I've provided the simplest case, in which the user just specifies the map and reduce parts of the application. Strictly speaking, Hadoop/Spark employ a more complex programming model that requires the user to explicitly specify 4 more functions, outlined here: http://en.wikipedia.org/wiki/MapReduce#Dataflow
import scalaz.syntax.id._

trait MapReduceModel {
  type MultiSet[T] = Iterable[T]

  // `map` must be a pure function
  def mapPhase[K1, K2, V1, V2](map: ((K1, V1)) => MultiSet[(K2, V2)])
                              (data: MultiSet[(K1, V1)]): MultiSet[(K2, V2)] =
    data.flatMap(map)

  def shufflePhase[K2, V2](mappedData: MultiSet[(K2, V2)]): Map[K2, MultiSet[V2]] =
    mappedData.groupBy(_._1).mapValues(_.map(_._2))

  // `reduce` must be a monoid
  def reducePhase[K2, V2, V3](reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)])
                             (shuffledData: Map[K2, MultiSet[V2]]): MultiSet[V3] =
    shuffledData.flatMap(reduce).map(_._2)

  def mapReduce[K1, K2, V1, V2, V3](data: MultiSet[(K1, V1)])
                                   (map: ((K1, V1)) => MultiSet[(K2, V2)])
                                   (reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)]): MultiSet[V3] =
    mapPhase(map)(data) |> shufflePhase |> reducePhase(reduce)
}
// Kinda how MapReduce works in Hadoop and Spark, except `.par` would ensure 1 element gets a process/thread on a cluster.
// Furthermore, the splitting here won't enforce any kind of balance and is quite unnecessary anyway, as one would expect
// it to already be split on HDFS - i.e. the filename would constitute K1.
// The shuffle phase will also be parallelized, and use the same partition as the map phase.
abstract class ParMapReduce(mapParNum: Int, reduceParNum: Int) extends MapReduceModel {
  def split[T](splitNum: Int)(data: MultiSet[T]): Set[MultiSet[T]]

  override def mapPhase[K1, K2, V1, V2](map: ((K1, V1)) => MultiSet[(K2, V2)])
                                       (data: MultiSet[(K1, V1)]): MultiSet[(K2, V2)] = {
    val groupedByKey = data.groupBy(_._1).map(_._2)
    groupedByKey.flatMap(split(mapParNum / groupedByKey.size + 1))
      .par.flatMap(_.map(map)).flatten.toList
  }

  override def reducePhase[K2, V2, V3](reduce: ((K2, MultiSet[V2])) => MultiSet[(K2, V3)])
                                      (shuffledData: Map[K2, MultiSet[V2]]): MultiSet[V3] =
    shuffledData.map(g => split(reduceParNum / shuffledData.size + 1)(g._2).map((g._1, _)))
      .par.flatMap(_.map(reduce))
      .flatten.map(_._2).toList
}
Map is a native JS method that can be applied to an array. It creates a new array as a result of some function mapped to every element in the original array. So if you mapped a function(element) { return element * 2;}, it would return a new array with every element doubled. The original array would go unmodified.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map
Reduce is a native JS method that can also be applied to an array. It takes a function and an initial output value called an accumulator. It loops through each element in the array, applies the function, and reduces the array to a single value (which begins as the accumulator). It is useful because you can produce any output you want; you just have to start with that type of accumulator. So if I wanted to reduce something into an object, I would start with an accumulator of {}.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce?v=a
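The same accumulator idea sketched in Python for concreteness (functools.reduce plays the role of Array.prototype.reduce here; words and counts are just example names): starting with an empty dict as the accumulator lets the reduction build up an object rather than a single number.

from functools import reduce

words = ["apple", "banana", "apple"]

# Start with {} as the accumulator to reduce a list into an object/dict of counts.
counts = reduce(lambda acc, word: {**acc, word: acc.get(word, 0) + 1}, words, {})
print(counts)  # {'apple': 2, 'banana': 1}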