Error loading large JSON files using Scala Play Framework 2

I'm trying to use Apache Bench to load test a group of large (4MB each) JSON requests. When running with a large file and many concurrent requests I get the following error:
Exception caught in RequestBodyHandler java.nio.channels.ClosedChannelException: null
Here is my ab command:
ab -p large.json -n 1000 -c 10 http://127.0.0.1:9000/json-tests
If I run this with no concurrency and only 10 requests it works fine. Increasing the number of requests or the concurrency causes this error to occur over and over.
My controller currently has no logic in it:
def addJsonTest = Action {
  Ok("OK")
}
Here is the full error:
[error] play - Exception caught in RequestBodyHandler
java.nio.channels.ClosedChannelException: null
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.setInterestOps(AbstractNioWorker.java:506) [netty-3.9.3.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker$1.run(AbstractNioWorker.java:455) [netty-3.9.3.Final.jar:na]
at org.jboss.netty.channel.socket.ChannelRunnableWrapper.run(ChannelRunnableWrapper.java:40) [netty-3.9.3.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372) [netty-3.9.3.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296) [netty-3.9.3.Final.jar:na]
This is just using Play in development mode. Is there any setup or configuration for having Play handle multiple large requests?
Thanks!

You need to handle the body reactively, with Iteratees:
val iteratee = Iteratee.foldM[Array[Byte], Either[Result, String]](Right("start")) { case (str, bytes) =>
  Future.successful(Left(Ok))
}

val parser = BodyParser(rh => iteratee)

def eatDust = Action(parser) { req =>
  Ok
}
See these links.
https://www.playframework.com/documentation/2.2.x/Iteratees
Play 2.x : Reactive file upload with Iteratees
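For illustration only, here is a minimal sketch of the same idea applied to this question, assuming Play 2.3-style iteratees; the limit value and the parser name limitedParser are made up. It folds over the incoming chunks and keeps only a running byte count, so a 4 MB body is never buffered in memory in one piece:

import play.api.mvc._
import play.api.libs.iteratee._
import play.api.libs.concurrent.Execution.Implicits.defaultContext

// Streams the request body chunk by chunk; only the byte count is kept.
def limitedParser(maxBytes: Long): BodyParser[Long] = BodyParser { _ =>
  Iteratee.fold[Array[Byte], Long](0L) { (seen, chunk) => seen + chunk.length }
    .map { total =>
      if (total > maxBytes) Left(Results.EntityTooLarge("body too large"))
      else Right(total)
    }
}

def addJsonTest = Action(limitedParser(5 * 1024 * 1024)) { request =>
  Ok("received " + request.body + " bytes")
}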

openApiGenerate doesn't generate Models

I use the Gradle plugin
id "org.openapi.generator" version "5.1.1"
and the following task configuration in my Gradle build script:
openApiGenerate {
    generatorName = "kotlin"
    inputSpec = "$rootDir/src/main/resources/META-INF/resources/API.v1.yaml".toString()
    outputDir = "$rootDir/generated".toString()
    apiPackage = "org.openapi.example.api"
    invokerPackage = "org.openapi.example.invoker"
    modelPackage = "org.openapi.example.model"
    configOptions = [
        dateLibrary: "java8"
    ]
}
When I call gradlew openApiGenerate, I get:
Execution failed for task ':openApiGenerate'.
> There were issues with the specification. The option can be disabled via validateSpec (Maven/Gradle) or --skip-validate-spec (CLI).
| Error count: 244, Warning count: 0
Errors:
-attribute paths.'/v1/pet/{name}/home/'(post).responses.406.content is unexpected
-attribute paths.'/v1/pet/{name}/home'(post).responses.400.content is unexpected
..........
But if I call the CLI version, my YAML generates both Models and API.
Why does it throw these exceptions?
Unfortunately, I cannot add even a part of my YAML here, because the site tells me "it looks like you add a lot of code".
If I add
skipValidateSpec = true
I get only the API without the Models. Why?

Get only part of a JSON when using an API on NodeMCU

I am using http.get() to fetch JSON from an API, but it's not getting the data. I suspect that this JSON is too big for the NodeMCU. I only need the information in the subpart "stats". Is it possible to http.get() only that part of the JSON?
EDIT:
This is my code
function getstats()
    http.get("https://api.aeon-pool.com/v1/stats_address?address=WmsGUrXTR7sgKmHEqRNLgPLndWKSvjFXcd4soHnaxVjY3aBWW4kncTrRcBJJgUkeGwcHfzuZABk6XK6qAp8VmSci2AyGHcUit", nil, function(code, pool)
        if (code < 0) then
            print("can't get stats")
        else
            h = cjson.decode(pool)
            hashrate = h[1]["hashrate"]
            print(hashrate)
            dofile('update_display.lua')
        end
    end)
end
I also have another function above getstats() that gets data from another API:
function getaeonrate()
    http.get("https://api.coinmarketcap.com/v1/ticker/aeon/?convert=EUR", nil, function(code, dataaeon)
        if (code < 0) then
            print("can't get aeon")
        else
            -- Decode JSON data
            m = cjson.decode(dataaeon)
            -- Extract AEON/EUR price from decoded JSON
            aeonrate = string.format("%f", m[1]["price_eur"]);
            aeonchange = "24h " .. m[1]["percent_change_24h"] .. "% 7d " .. m[1]["percent_change_7d"] .. "%"
            dofile('update_display.lua')
        end
    end)
end
But now the weird thing is: when I want to access 'pool' from getstats(), I get the JSON data from getaeonrate(). So "hashrate" isn't even in the JSON, because I am getting the JSON from the other function.
I tried making a new project with only getstats(), and that doesn't work at all; I always get errors like this:
HTTP client: Disconnected with error: -9
HTTP client: Connection timeout
HTTP client: Connection timeout
Yesterday I thought that the response from api.aeon-pool.com was too big. If you look at the JSON in your web browser, you can see that the top entry is 'stats', and I only need that, none of the other stuff. So if the response is too big, it would be nice to http.get() only that part of the JSON, hence my original question. At the moment I am not even sure what is not working correctly; I read that the NodeMCU firmware generally had problems with http.get() and that it didn't work correctly for a long time, but getting data from api.coinmarketcap.com works fine in the original project.
The problems with the HTTP module are with near certainty related to https://github.com/nodemcu/nodemcu-firmware/issues/1707 (SSL and HTTP are problematic).
Therefore, I tried with the more bare-bone TLS module on the current master branch. This means you need to manually parse the HTTP response including all headers looking for the JSON content. Besides, you seem to be on an older NodeMCU version as you're still using CJSON - I used SJSON below:
Current NodeMCU master branch
function getstats()
    buffer = nil
    counter = 0
    local srv = tls.createConnection()
    srv:on("receive", function(sck, payload)
        print("[stats] received data, " .. string.len(payload))
        if buffer == nil then
            buffer = payload
        else
            buffer = buffer .. payload
        end
        counter = counter + 1
        -- not getting HTTP content-length header back -> poor man's checking for complete response
        if counter == 2 then
            print("[stats] done, processing payload")
            local beginJsonString = buffer:find("{")
            local jsonString = buffer:sub(beginJsonString)
            local hashrate = sjson.decode(jsonString)["stats"]["hashrate"]
            print("[stats] hashrate from aeon-pool.com: " .. hashrate)
        end
    end)
    srv:on("connection", function(sck, c)
        sck:send("GET /v1/stats_address?address=WmsGUrXTR7sgKmHEqRNLgPLndWKSvjFXcd4soHnaxVjY3aBWW4kncTrRcBJJgUkeGwcHfzuZABk6XK6qAp8VmSci2AyGHcUit HTTP/1.1\r\nHost: api.aeon-pool.com\r\nConnection: close\r\nAccept: */*\r\n\r\n")
    end)
    srv:connect(443, "api.aeon-pool.com")
end
Note that the receive event is fired for every network frame: https://nodemcu.readthedocs.io/en/latest/en/modules/net/#netsocketon
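As an untested alternative to that frame counter (reusing the buffer variable from the code above), one could try to decode the accumulated buffer after every frame and only proceed once the JSON actually parses, i.e. once the whole body has arrived:

local start = buffer:find("{")
if start then
    -- pcall guards against decoding a still-incomplete JSON body
    local ok, decoded = pcall(sjson.decode, buffer:sub(start))
    if ok then
        print("[stats] hashrate from aeon-pool.com: " .. decoded["stats"]["hashrate"])
    end
end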
NodeMCU fails to establish a connection to api.coinmarketcap.com due to a TLS handshake failure. Not sure why that is. Otherwise your getaeonrate() could be implemented likewise.
Frozen 1.5.4 branch
With the old branch the net module can connect to coinmarketcap.com.
function getaeonrate()
    local srv = net.createConnection(net.TCP, 1)
    srv:on("receive", function(sck, payload)
        print("[aeon rate] received data, " .. string.len(payload))
        local beginJsonString = payload:find("%[")
        local jsonString = payload:sub(beginJsonString)
        local json = cjson.decode(jsonString)
        local aeonrate = string.format("%f", json[1]["price_eur"]);
        local aeonchange = "24h " .. json[1]["percent_change_24h"] .. "% 7d " .. json[1]["percent_change_7d"] .. "%"
        print("[aeon rate] aeonrate from coinmarketcap.com: " .. aeonrate)
        print("[aeon rate] aeonchange from coinmarketcap.com: " .. aeonchange)
    end)
    srv:on("connection", function(sck, c)
        sck:send("GET /v1/ticker/aeon/?convert=EUR HTTP/1.1\r\nHost: api.coinmarketcap.com\r\nConnection: close\r\nAccept: */*\r\n\r\n")
    end)
    srv:connect(443, "api.coinmarketcap.com")
end
Conclusion
The HTTP module and TLS seem a no-go for your APIs due to a bug in the firmware (1707).
The net/TLS module of the current master branch manages to connect to api.aeon-pool.com but not to api.coinmarketcap.com.
With the old and frozen 1.5.4 branch it's exactly the other way around.
There may (also) be issues with cipher suites that don't match between the firmware and the API provider(s).
-> :( no fun like that

How can we provide multiple values for a single argument, either in services.conf or commands.conf?

Here I am trying to use a plugin to check whether the service is running or not, whether there is any warning or critical action required, and at the same time to read the performance parameters.
We have used the plugin below to check if a server is alive or not and to read its performance data JSON:
https://github.com/drewkerrigan/nagios-http-json
I am trying to read a JSON file as below, which is hosted at http://localhost:8080/sample.json.
The plugin works perfectly on the command line; it shows me all the metrics available:
$:/usr/lib/nagios/plugins$ ./check_http_json.py -H localhost:8080 -p sample.json -m metrics.etp_count metrics.atc_count
OK: Status OK.|'metrics.etp_count'=101 'metrics.atc_count'=0
But when I try the same in the Icinga2 configuration, it doesn't show me the performance metrics: it doesn't give any error, but at the same time it doesn't show any value.
Find the JSON, commands.conf and services.conf below.
{
    "metrics": {
        "etp_count": "0",
        "atc_count": "101",
        "mean_time": -1.0
    }
}
Below are my commands.conf and services.conf
commands.conf
/* JSON read command */
object CheckCommand "json_check" {
    import "plugin-check-command"
    command = [ PluginDir + "/check_http_json.py" ]
    arguments = {
        "-H" = "$server_port$"
        "-p" = "$json_path$"
        "-w" = "$warning_value$"
        "-c" = "$critical_value$"
        "-m" = "$Metrics1$,$Metrics2$"
    }
}
services.conf
apply Service "json"{
import "generic-service"
check_command = "json_check"
vars.server_port="localhost:8080"
vars.json_path="sample.json"
vars.warning_value="metrics.etp_count,1:100"
vars.critical_value="metrics.etp_count,101:1000"
vars.Metrics1="metrics.etp_count"
vars.Metrics2="metrics.atc_count"
assign where host.name == NodeName
}
Does anyone have any idea how we can pass multiple values in commands.conf and services.conf?
I have resolved the issue.
I had to change the following code in the plugin file "check_http_json.py":
def checkMetrics(self):
    """Return a Nagios specific performance metrics string given keys and parameter definitions"""
    metrics = ''
    warning = ''
    critical = ''
    if self.rules.metric_list != None:
        for metric in self.rules.metric_list:
Replaced with:
def checkMetrics(self):
    """Return a Nagios specific performance metrics string given keys and parameter definitions"""
    metrics = ''
    warning = ''
    critical = ''
    if self.rules.metric_list != None:
        for metric in self.rules.metric_list[0].split():
Actually, the issue was that the list was not handled properly: because of the way services.conf passes the values, the plugin received them as a single string and could not iterate through the items in the list.
The string had to be split further to get the individual items for the metrics.
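For reference, an untested alternative sketch that avoids patching the plugin: let Icinga 2 expand an array custom variable into the -m argument, assuming the plugin accepts several metric keys after a single -m exactly as in the command-line example above. The variable name vars.metrics is made up.

// commands.conf fragment
"-m" = {
    value = "$metrics$"
    repeat_key = false    // produces "-m key1 key2" instead of "-m key1 -m key2"
}

// services.conf fragment
vars.metrics = [ "metrics.etp_count", "metrics.atc_count" ]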

nix-shell --command `stack build` leads to libpq-fe.h: No such file or directory

I am trying to compile my small project (a Yesod application with lambdacms) on NixOS. However, after using cabal2nix (more precisely cabal2nix project-karma.cabal --sha256=0 --shell > shell.nix), it seems I am still missing a dependency with respect to PostgreSQL.
My shell.nix file looks like this:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "default" }:
let
inherit (nixpkgs) pkgs;
f = { mkDerivation, aeson, base, bytestring, classy-prelude
, classy-prelude-conduit, classy-prelude-yesod, conduit, containers
, data-default, directory, fast-logger, file-embed, filepath
, hjsmin, hspec, http-conduit, lambdacms-core, monad-control
, monad-logger, persistent, persistent-postgresql
, persistent-template, random, resourcet, safe, shakespeare, stdenv
, template-haskell, text, time, transformers, unordered-containers
, uuid, vector, wai, wai-extra, wai-logger, warp, yaml, yesod
, yesod-auth, yesod-core, yesod-form, yesod-static, yesod-test
}:
mkDerivation {
pname = "karma";
version = "0.0.0";
sha256 = "0";
isLibrary = true;
isExecutable = true;
libraryHaskellDepends = [
aeson base bytestring classy-prelude classy-prelude-conduit
classy-prelude-yesod conduit containers data-default directory
fast-logger file-embed filepath hjsmin http-conduit lambdacms- core
monad-control monad-logger persistent persistent-postgresql
persistent-template random safe shakespeare template-haskell text
time unordered-containers uuid vector wai wai-extra wai-logger warp
yaml yesod yesod-auth yesod-core yesod-form yesod-static
nixpkgs.zlib
nixpkgs.postgresql
nixpkgs.libpqxx
];
libraryPkgconfigDepends = [ persistent-postgresql];
executableHaskellDepends = [ base ];
testHaskellDepends = [
base classy-prelude classy-prelude-yesod hspec monad-logger
persistent persistent-postgresql resourcet shakespeare transformers
yesod yesod-core yesod-test
];
license = stdenv.lib.licenses.bsd3;
};
haskellPackages = if compiler == "default"
then pkgs.haskellPackages
else pkgs.haskell.packages.${compiler};
drv = haskellPackages.callPackage f {};
in
if pkgs.lib.inNixShell then drv.env else drv
The output is as follows:
markus#nixos ~/git/haskell/karma/karma (git)-[master] % nix-shell --command `stack build`
postgresql-libpq-0.9.1.1: configure
ReadArgs-1.2.2: download
postgresql-libpq-0.9.1.1: build
ReadArgs-1.2.2: configure
ReadArgs-1.2.2: build
ReadArgs-1.2.2: install
-- While building package postgresql-libpq-0.9.1.1 using:
/run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/postgresql-libpq-0.9.1.1.log
[1 of 1] Compiling Main ( /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/Setup.hs, /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring postgresql-libpq-0.9.1.1...
Building postgresql-libpq-0.9.1.1...
Preprocessing library postgresql-libpq-0.9.1.1...
LibPQ.hsc:213:22: fatal error: libpq-fe.h: No such file or directory
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/run/current-system/sw/include -Icbits -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
I assume not much is missing, so a pointer would be nice.
What is also weird is that "nix-shell" works, but following that up with "stack exec yesod devel" tells me:
Resolving dependencies...
Configuring karma-0.0.0...
cabal: At least the following dependencies are missing:
classy-prelude >=0.10.2,
classy-prelude-conduit >=0.10.2,
classy-prelude-yesod >=0.10.2,
hjsmin ==0.1.*,
http-conduit ==2.1.*,
lambdacms-core >=0.3.0.2 && <0.4,
monad-logger ==0.3.*,
persistent >=2.0 && <2.3,
persistent-postgresql >=2.1.1 && <2.3,
persistent-template >=2.0 && <2.3,
uuid >=1.3,
wai-extra ==3.0.*,
warp >=3.0 && <3.2,
yesod >=1.4.1 && <1.5,
yesod-auth >=1.4.0 && <1.5,
yesod-core >=1.4.6 && <1.5,
yesod-form >=1.4.0 && <1.5,
yesod-static >=1.4.0.3 && <1.6
When using mysql instead, I am getting
pcre-light-0.4.0.4: configure
mysql-0.1.1.8: configure
mysql-0.1.1.8: build
Progress: 2/59
-- While building package mysql-0.1.1.8 using:
/run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/mysql-0.1.1.8.log
[1 of 1] Compiling Main ( /run/user/1000/stack12820/mysql-0.1.1.8/Setup.lhs, /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring mysql-0.1.1.8...
Building mysql-0.1.1.8...
Preprocessing library mysql-0.1.1.8...
In file included from C.hsc:68:0:
include/mysql_signals.h:9:19: fatal error: mysql.h: No such file or directory
#include "mysql.h"
^
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include/.. -Iinclude -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
-- While building package pcre-light-0.4.0.4 using:
/home/markus/.stack/setup-exe-cache/setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ configure --with-ghc=/run/current-system/sw/bin/ghc --user --package-db=clear --package-db=global --package-db=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/pkgdb/ --libdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib --bindir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/bin --datadir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/share --libexecdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/libexec --sysconfdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/etc --docdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --htmldir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --haddockdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --dependency=base=base-4.8.1.0-4f7206fd964c629946bb89db72c80011 --dependency=bytestring=bytestring-0.10.6.0-18c05887c1aaac7adb3350f6a4c6c8ed
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/pcre-light-0.4.0.4.log
Configuring pcre-light-0.4.0.4...
setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2: The program 'pkg-config'
version >=0.9.0 is required but it could not be found.
After adding pkgconfig to my global configuration, the build seems to get a little further ahead, so it seems that shell.nix is ignored somewhat.
(Sources for what I tried so far:
https://groups.google.com/forum/#!topic/haskell-stack/_ZBh01VP_fo)
Update: It seems like I overlooked this section of the manual
http://nixos.org/nixpkgs/manual/#using-stack-together-with-nix
However, the first idea that came to mind,
(stack --extra-lib-dirs=/nix/store/c6qy7n5wdwl164lnzha7vpc3av9yhnga-postgresql-libpq-0.9.1.1/lib build)
did not work yet; most likely I need to use --extra-include-dirs or try one of the variations. It seems weird that stack is still trying to build postgresql-libpq in the very same version, though.
Update2: Currently trying out "stack --extra-lib-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib --extra-include-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/include build" which looks promising. Does not look like the nix-way, but still.
Update3: Still getting
<command line>: can't load .so/.DLL for: /home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib/x86_64-linux-ghc-7.10.2/postgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6/libHSpostgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6-ghc7.10.2.so
(libpq.so.5: cannot open shared object file: No such file or directory) stack build 186.99s user 2.93s system 109% cpu 2:52.76 total
which is strange since libpq.so.5 is contained in /nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib.
An additional
$LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib
does not help either.
Update4:
By the way, yesod devel does the same as stack exec yesod devel. My libraries are downloaded to /nix/store but they are not recognized.
Maybe I need to make "build-nix" work and yesod devel does not work here?
Just for completeness, here is stack.yaml
resolver: nightly-2015-11-17
# run stack setup otherwise!!

# Local packages, usually specified by relative directory name
packages:
  - '.'

# Packages to be pulled from upstream that are not in the resolver (e.g., acme-missiles-0.3)
extra-deps: [lambdacms-core-0.3.0.2, friendly-time-0.4, lists-0.4.2, list-extras-0.4.1.4]

# Override default flag values for local packages and extra-deps
flags:
  karma:
    library-only: false
    dev: false

# Extra package databases containing global packages
extra-package-dbs: []
Next weekend, I will check out
https://pr06lefs.wordpress.com/2014/09/27/compiling-a-yesod-project-on-nixos/
and other search results.
Funny, because I've just had a similar problem myself - solved it by adding these two lines to stack.yaml:
extra-include-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/include/]
extra-lib-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/lib/]
You may want to check first which postgresql path from /nix/store you should use for include/ and lib/:
nix-build --no-out-link "<nixpkgs>" -A postgresql
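For example, a small usage sketch (the resulting store path will differ on your machine) that combines that nix-build call with the extra-dir flags already tried in the question:

PG=$(nix-build --no-out-link "<nixpkgs>" -A postgresql)
stack --extra-include-dirs=$PG/include --extra-lib-dirs=$PG/lib build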
And BTW, why do you use nix-shell if you are going to use stack and you have project-karma.cabal available..? Have you considered migrating your project with stack init..?
Looks like stack is trying to build haskellPackages.postgresql-libpq outside of the nix framework.
You probably don't want that to happen. Maybe try to add postgresql-libpq to libraryHaskellDepends?
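A minimal, untested sketch of what that could look like in the shell.nix above, assuming the haskellPackages version of postgresql-libpq and the librarySystemDepends attribute of the generic Haskell builder, trimmed down to the relevant lines:

f = { mkDerivation, base, postgresql-libpq, stdenv, ... }:
    mkDerivation {
      pname = "karma";
      version = "0.0.0";
      # Haskell binding built by Nix instead of being rebuilt by stack
      libraryHaskellDepends = [ base postgresql-libpq ];
      # native library that provides libpq-fe.h at build time
      librarySystemDepends = [ nixpkgs.postgresql ];
      license = stdenv.lib.licenses.bsd3;
    };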

How do I tell my dancer app to serialize objects in its cache?

I'm using a CHI interface to memcached (or File in devel) in my Dancer app, but I'm getting an error in the serializer when I cache an object. I have the following in my dancer config:
engines:
  JSON:
    allow_blessed: 1
    convert_blessed: 1
What else do I need?
Error message:
Error while loading bin/app.pl: encountered object 'C3M::CMF=HASH(0x3ef8aa8)', but neither allow_blessed nor convert_blessed settings are enabled at /usr/lib/perl5/site_perl/5.10/CHI/Serializer/JSON.pm line 19.
CHI::Serializer::JSON doesn't use the same serializer as Dancer::Serializer::JSON. Dancer::Serializer::JSON uses setting('engines') in config.yml, but there's no way to send configuration options to CHI::Serializer::JSON.
workaround:
use CHI::Serializer::JSON;
use JSON;

# Build a JSON codec with the options Dancer's config cannot pass through,
# then monkey-patch CHI's serializer to use it.
my $JSON = JSON->new->utf8->canonical;
$JSON->allow_blessed(1);
$JSON->convert_blessed(1);
*CHI::Serializer::JSON::serialize   = sub { $JSON->encode( $_[1] ) };
*CHI::Serializer::JSON::deserialize = sub { $JSON->decode( $_[1] ) };
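Note that with convert_blessed enabled, the class being cached also has to provide a TO_JSON method returning plain data. A rough, illustrative sketch using the class name from the error message (the method body is a guess):

package C3M::CMF;

# Called by the JSON encoder because convert_blessed is on.
sub TO_JSON {
    my ($self) = @_;
    return { %$self };   # expose the underlying hash as plain data
}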