nlohmann json version 3.2.0, runtime error

I have several JSON files from a game I am modding that I want to process with an external application.
As far as I can tell this code is within the specification, but I get a runtime error:
terminate called after throwing an instance of 'nlohmann::detail::type_error'
what(): [json.exception.type_error.304] cannot use at() with string
This is the stripped-down code to reproduce the error:
#include "json.hpp"
using json = nlohmann::json;
using namespace std;
namespace ns
{
class info
{
public:
std::string id;
};
void to_json(json& j, const info& mi);
void from_json(const json& j, info& mi);
}
int main()
{
json j ="[{\"id\": \"identifier\"}]";
ns::info info = j;
return 0;
}
void ns::to_json(json& j, const ns::info& mi)
{
j = json{
{"id",mi.id},
};
}
void ns::from_json(const json& j, ns::info& mi)
{
mi.id = j.at("id").get<std::string>();
}
And here is the compiler output:
-------------- Build: Debug in jsontest (compiler: GNU GCC Compiler)---------------
mingw32-g++.exe -Wall -std=c++11 -fexceptions -g -c C:\Users\UserOne\Documents\c++\jsontest\main.cpp -o obj\Debug\main.o
mingw32-g++.exe -o bin\Debug\jsontest.exe obj\Debug\main.o
Output file is bin\Debug\jsontest.exe with size 2.82 MB
Process terminated with status 0 (0 minute(s), 3 second(s))
0 error(s), 0 warning(s) (0 minute(s), 3 second(s))

There are two problems:
json j = "..."; initializes j with a JSON string value. It doesn't try to parse the contents. For that, you need to make it a json literal instead: json j = "..."_json;
After fixing that, you have a JSON array, but you're trying to access a field of a JSON object in ns::from_json().
So, fix both of those:
json j ="{\"id\": \"identifier\"}"_json;
and it'll work.
You might also consider using raw strings to avoid escaping all the quotes:
json j = R"({"id": "identifier"})"_json;
or just using an initializer list instead of parsing a string of json:
json j = { {"id", "identifier" } };
And if your source is a string being provided as a function argument or whatever, instead of a literal known at compile time:
std::string s = R"({"id": "identifier"})";
json j = json::parse(s);
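And if you actually want to keep the array-of-objects layout from the game files, you can parse it and let the library call your ns::from_json for each element. A minimal sketch, assuming the info/from_json definitions from the question (plus <vector>):
std::string s = R"([{"id": "identifier"}])";
json j = json::parse(s);                                       // j is a JSON array
std::vector<ns::info> infos = j.get<std::vector<ns::info>>();  // from_json applied per element
ns::info first = j.at(0).get<ns::info>();                      // or convert a single element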

Related

Support for std::tuple in SWIG?

When calling a SWIG-generated function returning std::tuple, I get a SWIG object of that std::tuple.
Is there a way to use typemaps or something else to extract the values? I have tried changing the code to std::vector for a small portion of the code, and that works (using %include <std_vector.i> and templates), but I don't want to make too many changes in the C++ part.
Edit: here is a minimal reproducible example:
foo.h
#pragma once
#include <tuple>
class foo
{
private:
double secret1;
double secret2;
public:
foo();
~foo();
std::tuple<double, double> return_thing(void);
};
foo.cpp
#include "foo.h"
#include <tuple>
foo::foo()
{
secret1 = 1;
secret2 = 2;
}
foo::~foo()
{
}
std::tuple<double, double> foo::return_thing(void) {
return {secret1, secret2};
}
foo.i
%module foo
%{
#include"foo.h"
%}
%include "foo.h"
When compiled on my Linux machine using
-:$ swig -python -c++ -o foo_wrap.cpp foo.i
-:$ g++ -c foo.cpp foo_wrap.cpp '-I/usr/include/python3.8' '-fPIC' '-std=c++17' '-I/home/simon/Desktop/test_stack_overflow_tuple'
-:$ g++ -shared foo.o foo_wrap.o -o _foo.so
I can import it in Python as shown:
test_module.ipynb
import foo as f
Foo = f.foo()
return_object = Foo.return_thing()
type(return_object)
print(return_object)
The output is
SwigPyObject
<Swig Object of type 'std::tuple< double,double > *' at 0x7fb5845d8420>
Hopefully this is more helpful, thank you for responding.
To clarify, I want to be able to use the values in Python the way I can in C++, something like this:
main.cpp
#include "foo.h"
#include <iostream>
//------------------------------------------------------------------------------
using namespace std;
int main()
{
foo Foo = foo();
auto [s1, s2] = Foo.return_thing();
cout << s1 << " " << s2 << endl;
}
//------------------------------------------------------------------------------
Github repo if anybody is interested
https://github.com/simon-cmyk/test_stack_overflow_tuple
Our goal is to make something like the following SWIG interface work intuitively:
%module test
%include "std_tuple.i"
%std_tuple(TupleDD, double, double);
%inline %{
std::tuple<double, double> func() {
return std::make_tuple(0.0, 1.0);
}
%}
We want to use this within Python in the following way:
import test
r=test.func()
print(r)
print(dir(r))
r[1]=1234
for x in r:
    print(x)
i.e. indexing and iteration should just work.
By re-using some of the pre-processor tricks I used to wrap std::function (which were themselves originally from another answer here on SO), we can define a neat macro that "just wraps" std::tuple for us. Although this answer is Python-specific, it should in practice be fairly simple to adapt for most other languages too. I'll post my std_tuple.i file first and then annotate/explain it after:
// [1]
%{
#include <tuple>
#include <utility>
%}
// [2]
#define make_getter(pos, type) const type& get##pos() const { return std::get<pos>(*$self); }
#define make_setter(pos, type) void set##pos(const type& val) { std::get<pos>(*$self) = val; }
#define make_ctorargN(pos, type) , type v##pos
#define make_ctorarg(first, ...) const first& v0 FOR_EACH(make_ctorargN, __VA_ARGS__)
// [3]
#define FE_0(...)
#define FE_1(action,a1) action(0,a1)
#define FE_2(action,a1,a2) action(0,a1) action(1,a2)
#define FE_3(action,a1,a2,a3) action(0,a1) action(1,a2) action(2,a3)
#define FE_4(action,a1,a2,a3,a4) action(0,a1) action(1,a2) action(2,a3) action(3,a4)
#define FE_5(action,a1,a2,a3,a4,a5) action(0,a1) action(1,a2) action(2,a3) action(3,a4) action(4,a5)
#define GET_MACRO(_1,_2,_3,_4,_5,NAME,...) NAME
%define FOR_EACH(action,...)
GET_MACRO(__VA_ARGS__, FE_5, FE_4, FE_3, FE_2, FE_1, FE_0)(action,__VA_ARGS__)
%enddef
// [4]
%define %std_tuple(Name, ...)
%rename(Name) std::tuple<__VA_ARGS__>;
namespace std {
struct tuple<__VA_ARGS__> {
// [5]
tuple(make_ctorarg(__VA_ARGS__));
%extend {
// [6]
FOR_EACH(make_getter, __VA_ARGS__)
FOR_EACH(make_setter, __VA_ARGS__)
size_t __len__() const { return std::tuple_size<std::decay_t<decltype(*$self)>>{}; }
%pythoncode %{
# [7]
def __getitem__(self, n):
    if n >= len(self): raise IndexError()
    return getattr(self, 'get%d' % n)()
def __setitem__(self, n, val):
    if n >= len(self): raise IndexError()
    getattr(self, 'set%d' % n)(val)
%}
}
};
}
%enddef
[1] These are just the extra includes we need for our macro to work.
[2] These helper macros apply to each of the type arguments we supply to our %std_tuple macro invocation; we need to be careful with commas here to keep the syntax correct.
[3] This is the mechanics of our FOR_EACH macro, which invokes the given action once per argument in our variadic macro argument list.
[4] Finally the definition of %std_tuple can begin. Essentially this is manually doing the work of %template for each specialisation of std::tuple we care to name inside the std namespace.
[5] We use our FOR_EACH macro magic to declare a constructor with an argument of the correct type for each element. The actual implementation here is the default one from the C++ library, which is exactly what we need/want.
[6] We use our FOR_EACH macro twice to generate member functions get0, get1, ..., getN of the correct type for each tuple element, and the correct number of them for the template argument size; likewise for setN. Doing it this way allows the usual SWIG typemaps for double, etc. (or whatever types your tuple contains) to be applied automatically and correctly for each call to std::get<N>. These are really just an implementation detail, not intended to be part of the public interface, but exposing them does no real harm.
[7] Finally we need an implementation of __getitem__ and a corresponding __setitem__. These simply look up and call the right getN/setN function on the class. We take care to raise IndexError instead of the default exception if an invalid index is used, as this is what stops iteration correctly when we iterate over the tuple.
This is then sufficient for us to run our target code and get the following output:
$ swig3.0 -python -c++ -Wall test.i && g++ -shared -o _test.so test_wrap.cxx -I/usr/include/python3.7 -m32 && python3.7 run.py
<test.TupleDD; proxy of <Swig Object of type 'std::tuple< double,double > *' at 0xf766a260> >
['__class__', '__del__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__swig_destroy__', '__swig_getmethods__', '__swig_setmethods__', '__weakref__', 'get0', 'get1', 'set0', 'set1', 'this']
0.0
1234.0
Generally this should work as you'd hope in most input/output situations in Python.
There are a few improvements we could look to make:
Implement __repr__ (a sketch of one possible approach follows this list)
Implement slicing so that tuple[n:m] type indexing works
Handle unpacking like Python tuples.
Maybe do some more automatic conversions for compatible types?
Avoid calling __len__ for every get/setitem call, either by caching the value in the class itself, or postponing it until the method lookup fails?
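For the first item, here is a minimal sketch of one possible approach, not part of the answer above, and hard-coded to the two-double TupleDD case rather than made generic with FOR_EACH. It adds a __repr__ via %extend and assumes std_string.i is included so the std::string return value is converted to a Python string:
%include "std_string.i"

%extend std::tuple<double, double> {
    std::string __repr__() const {
        // Hard-coded for the TupleDD (double, double) case; a generic version
        // would need another FOR_EACH-style helper for arbitrary arity/types.
        return "TupleDD(" + std::to_string(std::get<0>(*$self)) + ", "
                          + std::to_string(std::get<1>(*$self)) + ")";
    }
}
This should override the default SWIG repr on the proxy class, so print(r) would show the element values rather than the opaque Swig Object text.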

Calling a Python function that returns a C-like struct from a C program, using Cython and ctypes

I am new to Cython/ctypes, and I am trying to call a Python function from a C program using a Cython interface, but the data is either empty or incorrect. Here is the sample program.
Python function
$ cat c_struct.py
#!/usr/bin/env python2.7
from ctypes import *
class Request(Structure):
    _pack_ = 1
    _fields_ = [
        ('type', c_ubyte),
        ('subtype', c_ubyte),
        ('action', c_ubyte),
        ('checksum', c_ushort)
    ]

    def __repr__(self):
        return "'type': {}, 'subtype': {}, 'action': {}, 'checksum': {}".format(self.type,
                self.subtype, self.action, self.checksum)

req_msg = Request()

def get_message(typ):
    if typ == 1:
        req_msg.type = 12
        req_msg.subtype = 2
        req_msg.action = 3
        req_msg.checksum = 0x1234
        return req_msg
    else:
        return "String object From Python"
Cython wrapper
$ cat caller.pyx
import sys
sys.path.insert(0, '')
from c_struct import get_message
cdef public const void* get_data_frm_python(int typ):
    data = get_message(typ)
    print "Printing in Cython -",data
    return <const void*>data
And finally my C caller
$ cat main.c
#include <Python.h>
#include "caller.h"
#include <stdio.h>
typedef struct {
uint8_t type;
uint8_t subtype;
uint8_t action;
uint16_t checksum;
} __attribute__ ((packed)) request_t;
int
main()
{
PyImport_AppendInittab("caller", initcaller);
Py_Initialize();
PyImport_ImportModule("caller");
const char* str = get_data_frm_python(2);
printf("Printing in C - %s\n", str);
const request_t* reqP = (request_t*)get_data_frm_python(1);
printf("Printing in C - %u, %u, %u, %u\n", reqP->type, reqP->subtype, reqP->action, reqP->checksum);
return 0;
}
And a simple Makefile to build it
$ cat Makefile
target = main
cy_interface = caller
CY := cython
PYTHONINC := $(shell python-config --includes)
CFLAGS := -Wall $(PYTHONINC) -fPIC -O0 -ggdb3
LDFLAGS := $(shell python-config --ldflags)
CC=gcc
all: $(target)
%.c: %.pyx
	$(CY) $+
%.o: %.c
	$(CC) -fPIC $(CFLAGS) -c $+
$(target): $(cy_interface).o $(target).o
	$(CC) $(CFLAGS) -o $@ $^ $(LDFLAGS)
And finally the output :(
$ ./main
Printing in Cython - String object From Python
Printing in C -
Printing in Cython - 'type': 12, 'subtype': 2, 'action': 3, 'checksum': 4660
Printing in C - 1, 0, 0, 0
Can someone please help me understand what I am doing wrong?
Note: if I change void* to char*, at least the string data is fetched properly, but it segfaults for the struct:
$ cat caller.pyx
import sys
sys.path.insert(0, '')
from c_struct import get_message
cdef public const void* get_data_frm_python(int typ):
    data = get_message(typ)
    print "Printing in Cython -",data
    return <const char*>data
$ ./main
Printing in Cython - String object From Python
Printing in C - String object From Python
Printing in Cython - 'type': 12, 'subtype': 2, 'action': 3, 'checksum': 4660
TypeError: expected string or Unicode object, Request found
Exception TypeError: 'expected string or Unicode object, Request found' in 'caller.get_data_frm_python' ignored
Segmentation fault
<const void*>data
data is a Python object (a PyObject*). This line casts that PyObject* to a void*. It definitely isn't reasonable to interpret a PyObject* as either a C string or a request_t*.
<const char*>data
This gets a pointer to the first element of a Python string (which your struct isn't). The pointer is only valid while data is still alive, and unfortunately data is a function-local variable, so it may be freed almost as soon as the function returns. It's difficult to tell exactly what's going on here, but it's definitely wrong.
I don't really understand what you're trying to achieve here, so I'm struggling to give advice on how to do it better. Maybe drop ctypes and use Cython with a C union?
Anyway, this did the trick for me. Not a perfect solution, but something.
$ cat caller.pyx
import sys
sys.path.insert(0, '')
from libc.stdint cimport uintptr_t
from c_struct import get_message
cdef public const void* get_data_frm_python(int typ):
    data = get_message(typ)
    if isinstance(data, str):
        return <const char*>(data)
    cdef uintptr_t ptr = <uintptr_t>(get_message(typ))
    return <const void*>(ptr)

def get_message(typ):
    if typ == 1:
        req_msg.type = 12
        req_msg.subtype = 2
        req_msg.action = 3
        req_msg.checksum = 0x1234
        return addressof(req_msg)
    else:
        return "String object From Python"
$ ./main
Printing in C - String object From Python
Printing in C - 12, 2, 3, 4660

SystemTap userspace function tracing

I have a simple C++ program:
main.cpp
#include <iostream>
using namespace std;
int addition (int a, int b)
{
int r;
r=a+b;
return r;
}
int main ()
{
int z;
z = addition (5,3);
cout << "The result is " << z;
}
I want to generate function tracing for this:
- print function names and their input, output, and return types
My SystemTap script, para-callgraph.stp:
#! /usr/bin/env stap
function trace(entry_p, extra) {
%( $# > 1 %? if (tid() in trace) %)
printf("%s%s%s %s\n",
thread_indent (entry_p),
(entry_p>0?"->":"<-"),
probefunc (),
extra)
}
probe $1.call { trace(1, $$parms) }
probe $1.return { trace(-1, $$return) }
My C++ executable is called a (compiled as g++ -g main.cpp).
The command I run:
stap para-callgraph.stp 'process("a").function("*")' -c "./a > /dev/null"
0 a(15119):->_GLOBAL__I__Z8additionii
27 a(15119): ->__static_initialization_and_destruction_0 __initialize_p=0x0 __priority=0x0
168 a(15119): <-__static_initialization_and_destruction_0
174 a(15119):<-_GLOBAL__I__Z8additionii
0 a(15119):->main
18 a(15119): ->addition a=0x0 b=0x400895
30 a(15119): <-addition return=0x8
106 a(15119):<-main return=0x0
Here ->addition a=0x0 b=0x400895 shows addresses, not the actual values (i.e. 5 and 3) that I want.
How do I modify my stap script?
This appears to be a systemtap bug: it should print the value of b, not its address. Please report it to the systemtap@sourceware.org mailing list (with compiler/etc. versions and other info, as outlined in man error::reporting).
As to changing the script, the $$parms part is where the local variables are being transformed into a pretty-printed string. It could be changed to something like...
trace(1, $$parms . (#defined($foobar) ? (" foobar=".$foobar$) : ""))
to append foobar=XYZ to the trace record, wherever a parameter foobar is available. To work around the systemtap bug in question, you could try
trace(1, $$parms . (#defined($b) ? (" *b=".user_int($b)) : ""))
to dereference the b variable as if it were an int *.

Boost read_json and C++11

I'm trying to parse JSON using Boost's property_tree parser from C++11 code (my system is Debian Wheezy with gcc 4.7.2 and Boost 1.49). I tried the following code, based on Serializing and deserializing json with boost:
#include <map>
#include <sstream>
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/json_parser.hpp>
using boost::property_tree::ptree; using boost::property_tree::read_json; using boost::property_tree::write_json;
void example() {
// Write json.
ptree pt;
pt.put ("foo", "bar");
std::ostringstream buf;
write_json (buf, pt, false);
std::string json = buf.str(); // {"foo":"bar"}
// Read json.
ptree pt2;
std::istringstream is (json);
read_json (is, pt2);
std::string foo = pt2.get<std::string> ("foo");
}
If I compile this with g++ -std=c++03 -c, everything is fine. However, I also want to use C++11 features (which the code in the linked thread actually does!). But with g++ -std=c++11 -c I get compile errors:
In file included from /usr/include/boost/property_tree/json_parser.hpp:14:0,
from test.cpp:4:
/usr/include/boost/property_tree/detail/json_parser_read.hpp: In instantiation of ‘void boost::property_tree::json_parser::context<Ptree>::a_literal_val::operator() (boost::property_tree::json_parser::context<Ptree>::It, boost::property_tree::json_parser::context<Ptree>::It) const [with Ptree = boost::property_tree::basic_ptree<std::basic_string<char>, std::basic_string<char> >; boost::property_tree::json_parser::context<Ptree>::It = __gnu_cxx::__normal_iterator<char*, std::vector<char, std::allocator<char> > >]’:
/usr/include/boost/spirit/home/classic/core/scanner/scanner.hpp:148:13: required from ‘static void boost::spirit::classic::attributed_action_policy<boost::spirit::classic::nil_t>::call(const ActorT&, boost::spirit::classic::nil_t, const IteratorT&, const IteratorT&) [with ActorT = boost::property_tree::json_parser::context<boost::property_tree::basic_ptree<std::basic_string<char>, std::basic_string<char> > >::a_literal_val; IteratorT = __gnu_cxx::__normal_iterator<char*, std::vector<char, std::allocator<char> > >]’
/usr/include/boost/spirit/home/classic/core/scanner/scanner.hpp:163:13: required from ‘void boost::spirit::classic::action_policy::do_action(const ActorT&, AttrT&, const IteratorT&, const IteratorT&) const [with ActorT = boost::property_tree::json_parser::context<boost::property_tree::basic_ptree<std::basic_string<char>, std::basic_string<char> > >::a_literal_val; AttrT = boost::spirit::classic::nil_t; IteratorT = __gnu_cxx::__normal_iterator<char*, std::vector<char, std::allocator<char> > >]’
...
test.cpp:20:1: required from here
/usr/include/boost/property_tree/detail/json_parser_read.hpp:105:17: error: no matching function for call to ‘boost::property_tree::basic_ptree<std::basic_string<char>, std::basic_string<char> >::push_back(std::pair<std::basic_string<char>, std::basic_string<char> >)’
/usr/include/boost/property_tree/detail/json_parser_read.hpp:105:17: note: candidate is:
In file included from /usr/include/boost/property_tree/ptree.hpp:516:0,
from test.cpp:3:
/usr/include/boost/property_tree/detail/ptree_implementation.hpp:362:9: note: boost::property_tree::basic_ptree<Key, Data, KeyCompare>::iterator boost::property_tree::basic_ptree<Key, Data, KeyCompare>::push_back(const value_type&) [with Key = std::basic_string<char>; Data = std::basic_string<char>; KeyCompare = std::less<std::basic_string<char> >; boost::property_tree::basic_ptree<Key, Data, KeyCompare>::value_type = std::pair<const std::basic_string<char>, boost::property_tree::basic_ptree<std::basic_string<char>, std::basic_string<char> > >]
/usr/include/boost/property_tree/detail/ptree_implementation.hpp:362:9: note: no known conversion for argument 1 from ‘std::pair<std::basic_string<char>, std::basic_string<char> >’ to ‘const value_type& {aka const std::pair<const std::basic_string<char>, boost::property_tree::basic_ptree<std::basic_string<char>, std::basic_string<char> > >&}’
How can I use Boost's read_json with C++11? Do I need a newer Boost version for this (i.e. install manually from source instead of using Wheezy's packaged one)? Is there something wrong in my code? Or is this simply not possible?
This is a known bug in older Boost versions.
You can fix it by applying the following patch:
--- json_parser_read.hpp 2013-09-01 03:55:57.000000000 +0400
+++ /usr/include/boost/property_tree/detail/json_parser_read.hpp 2013-09-01 03:56:21.000000000 +0400
@@ -102,7 +102,7 @@
             void operator()(It b, It e) const
             {
                 BOOST_ASSERT(c.stack.size() >= 1);
-                c.stack.back()->push_back(std::make_pair(c.name, Str(b, e)));
+                c.stack.back()->push_back(std::make_pair(c.name, Ptree(Str(b, e))));
                 c.name.clear();
                 c.string.clear();
             }
or with
sed -i -e 's/std::make_pair(c.name, Str(b, e))/std::make_pair(c.name, Ptree(Str(b, e)))/' json_parser_read.hpp

Template parse error compiling with Thrust

I am trying to compile some code which allows some CPU routines to call a function which uses the GPU to speed up some calculations. The GPU code uses Thrust, specifically reduce and device_ptr. When I build the GPU code as a standalone using nvcc, there are no problems. However, attempting to integrate the GPU code with CPU C++ code causes the following compiler error when compiling the final "wrapper":
nvcc -O2 -c NLC_2D_TFIM.cpp -lcuda -lcudart -lcublas -lcusparse -L../CUDA/Lanczos/sort/sort/gnu/release -lmgpusort
In file included from /usr/local/cuda/bin/../include/thrust/pair.h:265:0,
from /usr/local/cuda/bin/../include/thrust/tuple.h:35,
from /usr/local/cuda/bin/../include/thrust/detail/functional/actor.h:29,
from /usr/local/cuda/bin/../include/thrust/detail/functional/placeholder.h:20,
from /usr/local/cuda/bin/../include/thrust/functional.h:26,
from /usr/local/cuda/bin/../include/thrust/system/detail/error_category.inl:22,
from /usr/local/cuda/bin/../include/thrust/system/error_code.h:516,
from /usr/local/cuda/bin/../include/thrust/system/cuda_error.h:26,
from /usr/local/cuda/bin/../include/thrust/detail/backend/cuda/malloc.inl:26,
from /usr/local/cuda/bin/../include/thrust/detail/backend/cuda/malloc.h:50,
from /usr/local/cuda/bin/../include/thrust/detail/backend/dispatch/malloc.h:22,
from /usr/local/cuda/bin/../include/thrust/detail/device_malloc.inl:23,
from /usr/local/cuda/bin/../include/thrust/device_malloc.h:102,
from /usr/local/cuda/bin/../include/thrust/detail/backend/internal_allocator.h:22,
from /usr/local/cuda/bin/../include/thrust/detail/uninitialized_array.h:23,
from /usr/local/cuda/bin/../include/thrust/detail/backend/cuda/copy_cross_space.inl:20,
from /usr/local/cuda/bin/../include/thrust/detail/backend/cuda/copy_cross_space.h:57,
from /usr/local/cuda/bin/../include/thrust/detail/backend/cuda/dispatch/copy.h:23,
from /usr/local/cuda/bin/../include/thrust/detail/backend/cuda/copy.h:21,
from /usr/local/cuda/bin/../include/thrust/detail/backend/dispatch/copy.h:24,
from /usr/local/cuda/bin/../include/thrust/detail/backend/copy.inl:20,
from /usr/local/cuda/bin/../include/thrust/detail/backend/copy.h:44,
from /usr/local/cuda/bin/../include/thrust/detail/copy.inl:20,
from /usr/local/cuda/bin/../include/thrust/detail/copy.h:39,
from /usr/local/cuda/bin/../include/thrust/detail/reference_base.inl:18,
from /usr/local/cuda/bin/../include/thrust/detail/reference_base.h:138,
from /usr/local/cuda/bin/../include/thrust/device_reference.h:27,
from /usr/local/cuda/bin/../include/thrust/detail/device_ptr.inl:23,
from /usr/local/cuda/bin/../include/thrust/device_ptr.h:181,
from ../CUDA/Lanczos/hamiltonian.h:32,
from ../CUDA/Lanczos/lanczos.h:8,
from NLC_2D_TFIM.cpp:17:
/usr/local/cuda/bin/../include/thrust/detail/pair.inl: In function ‘bool thrust::operator<(const thrust::pair<T1, T2>&, const thrust::pair<T1, T2>&)’:
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:22: error: ‘.’ cannot appear in a constant-expression
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:46: error: ‘.’ cannot appear in a constant-expression
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:36: error: parse error in template argument list
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:36: error: ‘.’ cannot appear in a constant-expression
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:58: error: ‘.’ cannot appear in a constant-expression
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:69: error: ‘.’ cannot appear in a constant-expression
/usr/local/cuda/bin/../include/thrust/detail/pair.inl:72:12: error: parse error in template argument list
make: *** [NLC_2D_TFIM.o] Error 1
NLC_2D_TFIM works with another module (Graphs) which uses std::pairs, but none of these are passed to the module which talks to the GPU. Every header uses std as its namespace, not thrust. All the parameters I'm passing to the GPU handler are regular C arrays, ints, etc.
The lines referenced above are:
#include"lanczos.h"
Which uses:
#include"hamiltonian.h"
And then from there:
#include<thrust/device_ptr.h>
In NLC_2D_TFIM.cu, the "wrapper", the code is:
ReadGraphsFromFile(fileGraphs, "rectanglegraphs.dat", TypeFlag); //graphs the information generated by the Graphs module
double J=1.;
for(int hh=1; hh<10; hh++) {
h = hh;
//Create some storage for things to be used in GPU functions
d_hamiltonian* HamilLancz = (d_hamiltonian*)malloc(HowMany*sizeof(d_hamiltonian));
parameters* data = (parameters*)malloc(HowMany*sizeof(parameters));
double** groundstates = (double**)malloc(HowMany*sizeof(double*));
double** eigenvalues = (double**)malloc(HowMany*sizeof(double*));
int* NumElem = (int*)malloc(HowMany*sizeof(int));
int** Bonds = (int**)malloc(HowMany*sizeof(int*));
//Go through each graph we read in earlier
unsigned int i = 1;
while ( i<fileGraphs.size() && fileGraphs.at(i)->Order < 14) { //skip the zeroth graph
//CPU gets the energy for smaller graphs
GENHAM HV(fileGraphs.at(i)->Order, J, h, fileGraphs.at(i)->AdjacencyList, TypeFlag);
LANCZOS lancz(HV.Vdim); //dimension of reduced Hilbert space (Sz sector)
HV.SparseHamJQ(); //generates sparse matrix Hamiltonian for Lanczos
energy = lancz.Diag(HV, 1, prm.valvec_, eVec);
i++;
}
if( argv[0] == "--gpu" || argv[0] == "-g" )
{
while ( i < fileGraphs.size() )
{
i += 30;
for( int j = 0; j < HowMany; j++)
{
Bonds[ j ] = (int*)malloc(sizeof(int)*3*fileGraphs.at(i - j)->Order);
for(unsigned int k = 0; k < fileGraphs.at(i - j)->Order; k++)
{
Bonds[ j ][ k ] = k;
Bonds[ j ][ k + fileGraphs.at(i - j)->Order ] = fileGraphs.at(i - j)->AdjacencyList.at(2*k).second;
Bonds[ j ][ k + 2*fileGraphs.at(i - j)->Order ] = fileGraphs.at(i - j)->AdjacencyList.at(2*k + 1).second;
}
data[ j ].Sz = 0;
data[ j ].dimension = 2;
data[ j ].J1 = J;
data[ j ].J2 = h;
data[ j ].modelType = 2;
eigenvalues[ j ] = (double*)malloc(sizeof(double));
}
//Calls the CPU functions which will talk to the GPU, including Thrust
ConstructSparseMatrix(HowMany, Bonds, HamilLancz, data, NumElem, 1);
lanczos(HowMany, NumElem, HamilLancz, groundstates, eigenvalues, 200, 1, 1e-12);
So there's nothing with an std::pair that's getting passed to the GPU. Here are the thrust calls:
for(int i = 0; i < howMany; i++)
{
thrust::device_ptr<int> red_ptr(d_H[i].set);
numElem[i] = thrust::reduce(red_ptr, red_ptr + rawSize[i]);
}
I'm not sure this is the right answer, but if your file extension is .cpp, doesn't nvcc just pass it to the regular C++ compiler? What happens if you rename the file to .cu?
(Also, I am not sure whether having -c and all the libraries in the same compile command is needed; -c usually suggests no linking is done.)
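If that doesn't fully solve it, a common pattern (a general sketch, not something from this thread; the file and function names below are made up for illustration) is to keep every Thrust include inside a .cu translation unit compiled by nvcc and expose only a plain C++ declaration to the host code, mirroring the reduce call shown above:
// gpu_reduce.h - plain C++ header, safe to include from .cpp files
int sum_on_device(int* d_data, int n);

// gpu_reduce.cu - the only file that includes Thrust; compile with nvcc
#include <thrust/device_ptr.h>
#include <thrust/reduce.h>
#include "gpu_reduce.h"

int sum_on_device(int* d_data, int n)
{
    thrust::device_ptr<int> p(d_data);  // wrap the raw device pointer
    return thrust::reduce(p, p + n);    // sum the n elements on the GPU
}
That way the host-side code only ever sees the plain declaration, and the Thrust headers never have to coexist with the rest of the host-side includes.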
It turned out that the problem was that I was linking against code using Blitz. Removing all the Blitz data structures and the include statements for it cleared up my compilation problem. Blitz uses its own namespace, so perhaps something in there was conflicting with Thrust, or there is a missing } or > somewhere.