I need to eject a floppy from the QEMU 3.0 monitor, but the command surprisingly fails, complaining the device is not found, although the device is really there.
Listing of devices:
(qemu) info block
fda: dos-6-22/Dos622-1.img (raw)
Attached to: /machine/unattached/device[11]
Removable device: not locked, tray closed
Cache mode: writeback
hda: hda.img (raw)
Attached to: /machine/peripheral-anon/device[1]
Cache mode: writeback
Eject command result:
(qemu) eject fda
Device 'fda' not found
This happens even though this documentation says it is how it should be done: https://www.linux-kvm.org/page/Change_cdrom (except that I want to eject the floppy instead of the CD-ROM).
The change command complains in the same way:
(qemu) change fda dos-6-22/Dos622-2.img raw
Device 'fda' not found
Is this a bug, or am I doing something wrong?
I tried different node names, always with the same result.
Update:
I'm pretty sure there is no correct answer and that this is rather a bug, which I just submitted: https://bugs.launchpad.net/qemu/+bug/1799766.
I'm posting this as an answer, though I'm not strictly sure of it. All I can say is that, if I understand correctly, this is a bug.
The answer comes in two parts.
The first part is a stripped-down failing invocation:
qemu-system-i386 \
-monitor stdio \
-machine type=isapc,vmport=off \
-blockdev driver=file,node-name=fda-img,filename=fda.img \
-blockdev driver=raw,node-name=fda,file=fda-img \
-global isa-fdc.driveA=fda
(qemu) info block
ide1-cd0: [not inserted]
Attached to: /machine/unattached/device[19]
Removable device: not locked, tray closed
sd0: [not inserted]
Removable device: not locked, tray closed
fda: fda.img (raw)
Attached to: /machine/unattached/device[13]
Removable device: not locked, tray closed
Cache mode: writeback
(qemu) eject fda
Device 'fda' not found
The second part is the same invocation without the last argument, -global isa-fdc.driveA=fda:
qemu-system-i386 \
-monitor stdio \
-machine type=isapc,vmport=off \
-blockdev driver=file,node-name=fda-img,filename=fda.img \
-blockdev driver=raw,node-name=fda,file=fda-img
(qemu) info block
ide1-cd0: [not inserted]
Attached to: /machine/unattached/device[19]
Removable device: not locked, tray closed
floppy0: [not inserted]
Attached to: /machine/unattached/device[13]
Removable device: not locked, tray closed
sd0: [not inserted]
Removable device: not locked, tray closed
(qemu) eject floppy0
There is no more error once -global isa-fdc.driveA=fda is removed. However, the documentation says:
-global driver=driver,property=property,value=value
Set default value of driver’s property prop to value, e.g.:
qemu-system-i386 -global ide-hd.physical_block_size=4096 disk-image.img
In particular, you can use this to set driver properties for devices which are created automatically by the machine model. To create a device which is not created automatically and set properties on it, use -device.
-global driver.prop=value is shorthand for -global driver=driver,property=prop,value=value. The longhand syntax works even when driver contains a dot.
The part I put stress on in the quote (setting properties for devices created automatically by the machine model) suggests I'm not misusing -global, and that this is most probably a bug.
Update for more details:
It seems that using -drive, instead of -blockdev together with the driveA assignment, does not give the same result, although Red Hat documentation recommends using -device instead of -drive, and the QEMU 3.0 documentation says -drive is essentially a shortcut for -device ("essentially", without saying what the difference is).
Below are the two cases, each with an excerpt of info block and an excerpt of info qtree.
With this one, eject floppy0 works:
qemu-system-i386 \
-monitor stdio \
-machine type=isapc,vmport=off \
-drive format=raw,if=floppy,media=disk,file=fda.img \
-device isa-vga,vgamem_mb=1 \
-serial msmouse
[…]
floppy0 (#block156): fda.img (raw)
Attached to: /machine/unattached/device[12]
Removable device: not locked, tray closed
Cache mode: writeback
[…]
dev: isa-fdc, id ""
iobase = 1008 (0x3f0)
irq = 6 (0x6)
dma = 2 (0x2)
driveA = ""
driveB = ""
check_media_rate = true
fdtypeA = "auto"
fdtypeB = "auto"
fallback = "288"
isa irq 6
bus: floppy-bus.0
type floppy-bus
dev: floppy, id ""
unit = 0 (0x0)
drive = "floppy0"
logical_block_size = 512 (0x200)
physical_block_size = 512 (0x200)
min_io_size = 0 (0x0)
opt_io_size = 0 (0x0)
discard_granularity = 4294967295 (0xffffffff)
write-cache = "auto"
share-rw = false
drive-type = "144"
With this one, eject fda does not work:
qemu-system-i386 \
-monitor stdio \
-machine type=isapc,vmport=off \
-blockdev driver=file,node-name=fda-img,filename=fda.img \
-blockdev driver=raw,node-name=fda,file=fda-img \
-global isa-fdc.driveA=fda \
-device isa-vga,vgamem_mb=1 \
-serial msmouse
[…]
fda: fda.img (raw)
Attached to: /machine/unattached/device[12]
Removable device: not locked, tray closed
Cache mode: writeback
[…]
dev: isa-fdc, id ""
iobase = 1008 (0x3f0)
irq = 6 (0x6)
dma = 2 (0x2)
driveA = ""
driveB = ""
check_media_rate = true
fdtypeA = "auto"
fdtypeB = "auto"
fallback = "288"
isa irq 6
bus: floppy-bus.0
type floppy-bus
dev: floppy, id ""
unit = 0 (0x0)
drive = "fda"
logical_block_size = 512 (0x200)
physical_block_size = 512 (0x200)
min_io_size = 0 (0x0)
opt_io_size = 0 (0x0)
discard_granularity = 4294967295 (0xffffffff)
write-cache = "auto"
share-rw = false
drive-type = "144"
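For completeness, a possible workaround I have not tested: instead of wiring the backend to the controller with -global, give the floppy an explicit -device with an id, and then drive the tray through the QMP blockdev-* commands, which accept a qdev id. This is only a sketch based on my reading of the documentation; the id fd0 is just a name I made up:
qemu-system-i386 \
  -monitor stdio \
  -machine type=isapc,vmport=off \
  -blockdev driver=file,node-name=fda-img,filename=fda.img \
  -blockdev driver=raw,node-name=fda,file=fda-img \
  -device floppy,drive=fda,id=fd0
Then, over QMP (not the human monitor):
{ "execute": "blockdev-open-tray", "arguments": { "id": "fd0" } }
{ "execute": "blockdev-remove-medium", "arguments": { "id": "fd0" } }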
My goal is to use BLEU as the early-stopping metric while training a translation model in FairSeq.
Following the documentation, I am adding the following arguments to my training script:
--eval-bleu --eval-bleu-args --eval-bleu-detok --eval-bleu-remove-bpe
I am getting the following error:
fairseq-train: error: unrecognized arguments: --eval-bleu --eval-bleu-args --eval-bleu-detok --eval-bleu-remove-bpe
System information:
fairseq version: 0.10.2
torch: 1.10.1+cu113
More Details:
When I try to fine-tune the M2M100 model, I get the error:
KeyError: 'bleu'
when using the following:
CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \
$path_2_data --ddp-backend=no_c10d \
--best-checkpoint-metric bleu \
--maximize-best-checkpoint-metric \
--max-tokens 2048 --no-epoch-checkpoints \
--finetune-from-model $pretrained_model \
--save-dir $checkpoint --task translation_multi_simple_epoch \
--encoder-normalize-before \
--langs 'af,am,ar,ast,az,ba,be,bg,bn,br,bs,ca,ceb,cs,cy,da,de,el,en,es,et,fa,ff,fi,fr,fy,ga,gd,gl,gu,ha,he,hi,hr,ht,hu,hy,id,ig,ilo,is,it,ja,jv,ka,kk,km,kn,ko,lb,lg,ln,lo,lt,lv,mg,mk,ml,mn,mr,ms,my,ne,nl,no,ns,oc,or,pa,pl,ps,pt,ro,ru,sd,si,sk,sl,so,sq,sr,ss,su,sv,sw,ta,th,tl,tn,tr,uk,ur,uz,vi,wo,xh,yi,yo,zh,zu' \
--lang-pairs $lang_pairs \
--decoder-normalize-before --sampling-method temperature \
--sampling-temperature 1.5 --encoder-langtok src \
--decoder-langtok --criterion label_smoothed_cross_entropy \
--label-smoothing 0.2 --optimizer adam --adam-eps 1e-06 \
--adam-betas '(0.9, 0.98)' --lr-scheduler inverse_sqrt \
--lr 3e-05 --warmup-updates 2500 --max-update 400000 \
--dropout 0.3 --attention-dropout 0.1 \
--weight-decay 0.0 --update-freq 2 --save-interval 1 \
--save-interval-updates 5000 --keep-interval-updates 10 \
--seed 222 --log-format simple --log-interval 2 --patience 5 \
--arch transformer_wmt_en_de_big --encoder-layers 24 \
--decoder-layers 24 --encoder-ffn-embed-dim 8192 \
--decoder-ffn-embed-dim 8192 --encoder-layerdrop 0.05 \
--decoder-layerdrop 0.05 --share-decoder-input-output-embed \
--share-all-embeddings --fixed-dictionary $fix_dict --fp16 \
--skip-invalid-size-inputs-valid-test
The task that you are using, translation_multi_simple_epoch, does not have these arguments; they are specific to the translation task.
Note that some of the arguments that you are using require values.
--eval-bleu-args expects the generation arguments used for the BLEU scoring (beam size, length penalty, etc.) as a JSON string. If you are happy with the defaults, you should skip it.
--eval-bleu-detok expects a specification of how you want to detokenize the model output. The default value is space, which does not do anything.
For more details, see the documentation of the translation task in FairSeq.
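For reference, with the plain translation task these options are used roughly like this (the data path and the numeric values are just placeholders):
fairseq-train data-bin/my-dataset \
    --task translation \
    --arch transformer --share-decoder-input-output-embed \
    --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4096 \
    --eval-bleu \
    --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \
    --eval-bleu-detok moses \
    --eval-bleu-remove-bpe \
    --best-checkpoint-metric bleu \
    --maximize-best-checkpoint-metric \
    --patience 5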
I compiled Chromium as a debug build:
# Set build arguments here. See `gn help buildargs`.
enable_nacl=false
symbol_level=2
is_asan = true
is_lsan = false
is_debug = true
but somehow I cannot see the TCMalloc symbols when using this TCMalloc inspection script:
https://github.com/marcinguy/tcmalloc-inspector
With a sample program compiled against TCMalloc it works.
In Chromium, it cannot find these symbols:
# tcmalloc
pageheap_ = gdb.parse_and_eval('\'tcmalloc::Static::pageheap_\'')
central_cache_ = gdb.parse_and_eval('\'tcmalloc::Static::central_cache_\'')
thread_heaps_ = gdb.parse_and_eval('\'tcmalloc::ThreadCache::thread_heaps_\'')
sizemap_ = gdb.parse_and_eval('\'tcmalloc::Static::sizemap_\'') # XXX cache
spantype = gdb.lookup_type('tcmalloc::Span').pointer()
knumclasses = gdb.parse_and_eval('kNumClasses') # XXX skip 0?
kmaxpages = gdb.parse_and_eval('kMaxPages')
pagesize = 1 << int(gdb.parse_and_eval('kPageShift'))
How can I compile Chromium so that the TCMalloc it uses has debug symbols, i.e., so that these symbols are available?
I have a Python 2 program that runs qemu with a FreeBSD image.
expect()ing lines of output works.
However, expect()ing output whose line is not terminated (such as when waiting for a prompt like login:) does not; it times out.
I suspect something in the communication between qemu and my program is doing line buffering, but how do I find out which of them it is? (A diagnostic sketch follows the list below.) Candidates that I can think of:
FreeBSD itself. I find that unlikely: it shows prompts when running interactively, and qemu's -nographic option shouldn't make a difference for the emulated VM (but I may be wrong).
Something in the setup of the pty. I have zero experience with ptys. If that's the issue, this would be a bug in pexpect since pexpect is setting the pty up.
A bug in pexpect.
Something in my own script... but I have no clue what that could be.
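What I plan to try next (an untested sketch, using the child object from the script below): catch the timeout and dump whatever pexpect has buffered at that point. If the prompt text shows up in child.before, the bytes did arrive and the problem is on my side; if it is absent, something upstream really is holding the output back.
try:
    child.expect('login: ', timeout=90)
except pexpect.TIMEOUT:
    # On a timeout, child.before holds everything received so far.
    print('buffered so far: %r' % child.before)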
For reference, here's the stripped-down code (including download and unpack, should anybody want to play with it):
#! /usr/bin/env python2
import os
import pexpect
import re
import sys
import time

def run(cmd):
    '''Run command, log to stdout, no timeout, return the status code.'''
    print('run: ' + cmd)
    (output, rc) = pexpect.run(
        cmd,
        withexitstatus=1,
        encoding='utf-8',
        logfile=sys.stdout,
        timeout=None
    )
    if rc != 0:
        print('simple.py: Command failed with return code: ' + str(rc))
        exit(rc)

download_path = 'https://download.freebsd.org/ftp/releases/VM-IMAGES/12.0-RELEASE/amd64/Latest'
image_file = 'FreeBSD-12.0-RELEASE-amd64.qcow2'
image_file_xz = image_file + '.xz'

if not os.path.isfile(image_file_xz):
    run('curl -o %s %s/%s' % (image_file_xz, download_path, image_file_xz))
if not os.path.isfile(image_file):
    # Reset image file to initial state
    run('xz --decompress --keep --force --verbose ' + image_file_xz)

#cmd = 'qemu-system-x86_64 -snapshot -monitor none -display curses -chardev stdio,id=char0 ' + image_file
cmd = 'qemu-system-x86_64 -snapshot -nographic ' + image_file
print('interact with: ' + cmd)
child = pexpect.spawn(
    cmd,
    timeout=90,  # FreeBSD takes roughly 60 seconds to boot
    maxread=1,
)
child.logfile = sys.stdout

def expect(pattern):
    result = child.expect([pexpect.TIMEOUT, pattern])
    if result == 0:
        print("timeout: %d reached when waiting for: %s" % (child.timeout, pattern))
        exit(1)
    return result - 1

if False:
    # This does not work: the prompt is not visible, then timeout
    expect('login: ')
else:
    # Workaround, tested to work:
    expect(re.escape('FreeBSD/amd64 (freebsd)'))  # Line before prompt
    time.sleep(1)  # MUCH longer than actually needed, just to be safe
    child.sendline('root')

# This will always time out, and terminate the script
expect('# ')
print('We want to get here but cannot')
I am using Mosquitto version 1.4.10 with TLS certificates. I am using the plugin https://github.com/mbachry/mosquitto_pyauth to authorize users, and it works well for mosquitto_pub (as in, when someone tries to publish, the publish gets authorized by the module first).
However, it seems that mosquitto_sub is able to subscribe to anything without being authorized. How do I enforce security when someone is only trying to access a topic in read-only mode?
I went through the mosquitto.conf file and can't seem to find anything related to this.
For example, I am able to subscribe like this:
mosquitto_sub --cafile /etc/mosquitto/ca.crt --cert /etc/mosquitto/client.crt --key /etc/mosquitto/client.key -h ubuntu -p 1883 -t c/# -d
and I am able to see messages coming from a publisher like this:
mosquitto_pub --cafile /etc/mosquitto/ca.crt --cert /etc/mosquitto/client.crt --key /etc/mosquitto/client.key -h ubuntu -p 1883 -t c/2/b/3/p/3/rt/13/r/123 -m 32 -q 1
What I am trying to do is prevent mosquitto_sub from reading all messages at the root level without authorization.
The Python code that does the authorization looks like this (the auth data is stored in a Cassandra DB):
import sys
import mosquitto_auth
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel

## program entry point from mosquitto...
def plugin_init(opts):
    global cluster, session, select_device_query
    conf = dict(opts)
    cluster = Cluster(['192.168.56.102'])
    session = cluster.connect('hub')
    select_device_query = session.prepare('SELECT * from devices where uid=?')
    select_device_query.consistency_level = ConsistencyLevel.QUORUM
    print 'Cassandra cluster initialized'

def acl_check(clientid, username, topic, access):
    device_data = session.execute(select_device_query, [username])
    if device_data.current_rows.__len__() > 0:
        device_data = device_data[0]
        # sample device data looks like this:
        # Row(uid=u'08:00:27:aa:8f:91', brand=3, company=2, device=15617, property=3, room=490, room_number=u'3511', room_type=13, stamp=datetime.datetime(2016, 12, 12, 6, 29, 54, 723000))
        subscribable_topic = 'c/' + str(device_data.company) \
            + '/b/' + str(device_data.brand) \
            + '/p/' + str(device_data.property) \
            + '/rt/' + str(device_data.room_type) \
            + '/r/' + str(device_data.room) \
            + '/#'
        matches = mosquitto_auth.topic_matches_sub(subscribable_topic, topic)
        print 'ACL: user=%s topic=%s, matches = %s' % (username, topic, matches)
        return matches
    return False
The function acl_check always seems to be called when mosquitto_pub tries to connect, but it is never called when mosquitto_sub connects.
The C code behind this Python module is here: https://github.com/mbachry/mosquitto_pyauth/blob/master/auth_plugin_pyauth.c
Add the following to your mosquitto.conf:
...
allow_anonymous false
...
This will stop users without credentials from logging on to the broker.
You can also add an ACL rule for the anonymous user if there are certain topics you want unauthenticated clients to be able to see.
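A minimal sketch of what that could look like (the file path and the public/# topic are placeholders I made up; check the mosquitto.conf man page for how acl_file interacts with an auth plugin):
# mosquitto.conf
allow_anonymous false
acl_file /etc/mosquitto/acl

# /etc/mosquitto/acl
# Rules before the first "user" line apply to anonymous clients
# (only relevant if anonymous access is allowed at all):
topic read public/#

# Rules for a specific authenticated user:
user somedevice
topic readwrite c/#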
I am trying to compile my small project (a Yesod application with LambdaCms) on NixOS. However, after using cabal2nix (more precisely, cabal2nix project-karma.cabal --sha256=0 --shell > shell.nix), it seems I am still missing a dependency related to PostgreSQL.
My shell.nix file looks like this:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "default" }:

let

  inherit (nixpkgs) pkgs;

  f = { mkDerivation, aeson, base, bytestring, classy-prelude
      , classy-prelude-conduit, classy-prelude-yesod, conduit, containers
      , data-default, directory, fast-logger, file-embed, filepath
      , hjsmin, hspec, http-conduit, lambdacms-core, monad-control
      , monad-logger, persistent, persistent-postgresql
      , persistent-template, random, resourcet, safe, shakespeare, stdenv
      , template-haskell, text, time, transformers, unordered-containers
      , uuid, vector, wai, wai-extra, wai-logger, warp, yaml, yesod
      , yesod-auth, yesod-core, yesod-form, yesod-static, yesod-test
      }:
      mkDerivation {
        pname = "karma";
        version = "0.0.0";
        sha256 = "0";
        isLibrary = true;
        isExecutable = true;
        libraryHaskellDepends = [
          aeson base bytestring classy-prelude classy-prelude-conduit
          classy-prelude-yesod conduit containers data-default directory
          fast-logger file-embed filepath hjsmin http-conduit lambdacms-core
          monad-control monad-logger persistent persistent-postgresql
          persistent-template random safe shakespeare template-haskell text
          time unordered-containers uuid vector wai wai-extra wai-logger warp
          yaml yesod yesod-auth yesod-core yesod-form yesod-static
          nixpkgs.zlib
          nixpkgs.postgresql
          nixpkgs.libpqxx
        ];
        libraryPkgconfigDepends = [ persistent-postgresql ];
        executableHaskellDepends = [ base ];
        testHaskellDepends = [
          base classy-prelude classy-prelude-yesod hspec monad-logger
          persistent persistent-postgresql resourcet shakespeare transformers
          yesod yesod-core yesod-test
        ];
        license = stdenv.lib.licenses.bsd3;
      };

  haskellPackages = if compiler == "default"
                      then pkgs.haskellPackages
                      else pkgs.haskell.packages.${compiler};

  drv = haskellPackages.callPackage f {};

in

  if pkgs.lib.inNixShell then drv.env else drv
The output is as follows:
markus#nixos ~/git/haskell/karma/karma (git)-[master] % nix-shell --command `stack build`
postgresql-libpq-0.9.1.1: configure
ReadArgs-1.2.2: download
postgresql-libpq-0.9.1.1: build
ReadArgs-1.2.2: configure
ReadArgs-1.2.2: build
ReadArgs-1.2.2: install
-- While building package postgresql-libpq-0.9.1.1 using:
/run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/postgresql-libpq-0.9.1.1.log
[1 of 1] Compiling Main ( /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/Setup.hs, /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack31042/postgresql-libpq-0.9.1.1/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring postgresql-libpq-0.9.1.1...
Building postgresql-libpq-0.9.1.1...
Preprocessing library postgresql-libpq-0.9.1.1...
LibPQ.hsc:213:22: fatal error: libpq-fe.h: No such file or directory
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/PostgreSQL/LibPQ_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/run/current-system/sw/include -Icbits -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
I assume not much is missing, so a pointer would be nice.
What is also weird is that nix-shell works, but following that up with stack exec yesod devel tells me:
Resolving dependencies...
Configuring karma-0.0.0...
cabal: At least the following dependencies are missing:
classy-prelude >=0.10.2,
classy-prelude-conduit >=0.10.2,
classy-prelude-yesod >=0.10.2,
hjsmin ==0.1.*,
http-conduit ==2.1.*,
lambdacms-core >=0.3.0.2 && <0.4,
monad-logger ==0.3.*,
persistent >=2.0 && <2.3,
persistent-postgresql >=2.1.1 && <2.3,
persistent-template >=2.0 && <2.3,
uuid >=1.3,
wai-extra ==3.0.*,
warp >=3.0 && <3.2,
yesod >=1.4.1 && <1.5,
yesod-auth >=1.4.0 && <1.5,
yesod-core >=1.4.6 && <1.5,
yesod-form >=1.4.0 && <1.5,
yesod-static >=1.4.0.3 && <1.6
When using mysql instead, I get:
pcre-light-0.4.0.4: configure
mysql-0.1.1.8: configure
mysql-0.1.1.8: build
Progress: 2/59
-- While building package mysql-0.1.1.8 using:
/run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build --ghc-options " -ddump-hi -ddump-to-file"
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/mysql-0.1.1.8.log
[1 of 1] Compiling Main ( /run/user/1000/stack12820/mysql-0.1.1.8/Setup.lhs, /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/Main.o )
Linking /run/user/1000/stack12820/mysql-0.1.1.8/.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/setup/setup ...
Configuring mysql-0.1.1.8...
Building mysql-0.1.1.8...
Preprocessing library mysql-0.1.1.8...
In file included from C.hsc:68:0:
include/mysql_signals.h:9:19: fatal error: mysql.h: No such file or directory
#include "mysql.h"
^
compilation terminated.
compiling .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c failed (exit code 1)
command was: /nix/store/9fbfiij3ajnd3fs1zyc2qy0ispbszrr7-gcc-wrapper-4.9.3/bin/gcc -c .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.c -o .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Database/MySQL/Base/C_hsc_make.o -fno-stack-protector -D__GLASGOW_HASKELL__=710 -Dlinux_BUILD_OS=1 -Dx86_64_BUILD_ARCH=1 -Dlinux_HOST_OS=1 -Dx86_64_HOST_ARCH=1 -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include -I/nix/store/7ppa4k2drrvjk94rb60c1df9nvw0z696-mariadb-10.0.22-lib/include/.. -Iinclude -I.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen -include .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/autogen/cabal_macros.h -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/bytes_6elQVSg5cWdFrvRnfxTUrH/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/base_GDytRqRVSUX7zckgKqJjgw/include -I/nix/store/6ykqcjxr74l642kv9gf1ib8v9yjsgxr9-gmp-5.1.3/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/integ_2aU3IZNMF9a7mQ0OzsZ0dS/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include -I/nix/store/xphvly2zcd6jsc2xklz1zmmz4y0dh3ny-ghc-7.10.2/lib/ghc-7.10.2/include/
-- While building package pcre-light-0.4.0.4 using:
/home/markus/.stack/setup-exe-cache/setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ configure --with-ghc=/run/current-system/sw/bin/ghc --user --package-db=clear --package-db=global --package-db=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/pkgdb/ --libdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib --bindir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/bin --datadir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/share --libexecdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/libexec --sysconfdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/etc --docdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --htmldir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --haddockdir=/home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/doc/pcre-light-0.4.0.4 --dependency=base=base-4.8.1.0-4f7206fd964c629946bb89db72c80011 --dependency=bytestring=bytestring-0.10.6.0-18c05887c1aaac7adb3350f6a4c6c8ed
Process exited with code: ExitFailure 1
Logs have been written to: /home/markus/git/haskell/karma/karma/.stack-work/logs/pcre-light-0.4.0.4.log
Configuring pcre-light-0.4.0.4...
setup-Simple-Cabal-1.22.4.0-x86_64-linux-ghc-7.10.2: The program 'pkg-config'
version >=0.9.0 is required but it could not be found.
After adding pkgconfig to my global configuration, the build seems to get a little further, so it seems that shell.nix is somewhat ignored.
(Sources for what I tried so far:
https://groups.google.com/forum/#!topic/haskell-stack/_ZBh01VP_fo)
Update: It seems like I overlooked this section of the manual
http://nixos.org/nixpkgs/manual/#using-stack-together-with-nix
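If I read that section correctly, the idea is to let stack itself provision the system libraries through Nix, via a nix section in stack.yaml, roughly like this (untested on my side, and I'm not sure my stack version supports it yet):
nix:
  enable: true
  packages: [ postgresql, zlib ]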
However, the first idea that came to mind,
stack --extra-lib-dirs=/nix/store/c6qy7n5wdwl164lnzha7vpc3av9yhnga-postgresql-libpq-0.9.1.1/lib build
did not work yet; most likely I need to use --extra-include-dirs or try one of the variations. It seems weird that stack is still trying to build postgresql-libpq in the very same version, though.
Update2: Currently trying out stack --extra-lib-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib --extra-include-dirs=/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/include build, which looks promising. It does not look like the Nix way, but still.
Update3: Still getting
<command line>: can't load .so/.DLL for: /home/markus/.stack/snapshots/x86_64-linux/nightly-2015-11-17/7.10.2/lib/x86_64-linux-ghc-7.10.2/postgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6/libHSpostgresql-libpq-0.9.1.1-ABGs5p1J8FbEwi6uvHaiV6-ghc7.10.2.so
(libpq.so.5: cannot open shared object file: No such file or directory) stack build 186.99s user 2.93s system 109% cpu 2:52.76 total
which is strange since libpq.so.5 is contained in /nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib.
An additional
$LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/nix/store/1xf77x47d0m23nbda0azvkvj8w8y77c7-postgresql-9.4.5/lib
does not help either.
Update4:
By the way, yesod devel does the same as stack exec yesod devel. My libraries are downloaded to /nix/store but they are not recognized.
Maybe I need to make "build-nix" work and yesod devel does not work here?
Just for completeness, here is stack.yaml
resolver: nightly-2015-11-17
#run stack setup otherwise!!

# Local packages, usually specified by relative directory name
packages:
- '.'

# Packages to be pulled from upstream that are not in the resolver (e.g., acme-missiles-0.3)
extra-deps: [lambdacms-core-0.3.0.2, friendly-time-0.4, lists-0.4.2, list-extras-0.4.1.4]

# Override default flag values for local packages and extra-deps
flags:
  karma:
    library-only: false
    dev: false

# Extra package databases containing global packages
extra-package-dbs: []
Next weekend, I will check out
https://pr06lefs.wordpress.com/2014/09/27/compiling-a-yesod-project-on-nixos/
and other search results.
Funny, because I've just had a similar problem myself; I solved it by adding these two lines to stack.yaml:
extra-include-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/include/]
extra-lib-dirs: [/nix/store/jrdvjvf0w9nclw7b4k0pdfkljw78ijgk-postgresql-9.4.5/lib/]
You may want to check first which PostgreSQL path from /nix/store you should use for include/ and lib/:
nix-build --no-out-link "<nixpkgs>" -A postgresql
And by the way, why do you use nix-shell if you are going to use stack and you have project-karma.cabal available? Have you considered migrating your project with stack init?
Looks like stack is trying to build haskellPackages.postgresql-libpq outside of the nix framework.
You probably don't want that to happen. Maybe try to add postgresql-libpq to libraryHaskellDepends?
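Untested, but roughly the change I have in mind in your shell.nix: add postgresql-libpq to the argument list of f, and then to the dependency list, something like
libraryHaskellDepends = [
  # ... the existing dependencies from your shell.nix ...
  persistent-postgresql
  postgresql-libpq      # added, so the Haskell binding is provided by nix
  nixpkgs.postgresql    # already in your list; provides libpq itself
];
so that stack does not try to build that package outside of nix.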