Missing build file during sbt run - chisel

I added "--backend" and "v" to my chiselMainTest list, and although I am getting verilog output, I am also getting a build error:
In file included from ./vpi.cpp:1:
./vpi.h:4:10: fatal error: 'vpi_user.h' file not found
#include "vpi_user.h"
^
1 error generated.
A complete listing of the sbt run follows:
BigKiss:chisel mykland$ sbt run
[info] Set current project to chisel (in build file:/Users/mykland/work/chisel/)
[info] Compiling 1 Scala source to /Users/mykland/work/chisel/target/scala-2.10/classes...
[warn] there were 38 feature warning(s); re-run with -feature for details
[warn] one warning found
[info] Running mainStub
[info] [0.056] // COMPILING < (class lut3to1_1)>(0)
[info] [0.078] giving names
[info] [0.088] executing custom transforms
[info] [0.089] adding clocks and resets
[info] [0.093] inferring widths
[info] [0.108] checking widths
[info] [0.110] lowering complex nodes to primitives
[info] [0.113] removing type nodes
[info] [0.117] compiling 84 nodes
[info] [0.117] computing memory ports
[info] [0.117] resolving nodes to the components
[info] [0.133] creating clock domains
[info] [0.134] pruning unconnected IOs
[info] [0.136] checking for combinational loops
[info] [0.139] NO COMBINATIONAL LOOP FOUND
[info] [0.149] COMPILING <lut3to1_1 (class lut3to1_1)> 0 CHILDREN (0,0)
In file included from ./vpi.cpp:1:
./vpi.h:4:10: fatal error: 'vpi_user.h' file not found
#include "vpi_user.h"
^
1 error generated.
[info] [0.666] g++ -c -o ./vpi.o -I$VCS_HOME/include -I./ -fPIC -std=c++11 ./vpi.cpp RET 1
[error] lut3to1_1.scala:58: failed to compile vpi.cpp in class mainStub$
Re-running Chisel in debug mode to obtain erroneous line numbers...
[info] [0.030] // COMPILING < (class lut3to1_1)>(0)
[info] [0.035] giving names
[info] [0.037] executing custom transforms
[info] [0.037] adding clocks and resets
[info] [0.038] inferring widths
[info] [0.045] checking widths
[info] [0.046] lowering complex nodes to primitives
[info] [0.047] removing type nodes
[info] [0.049] compiling 84 nodes
[info] [0.049] computing memory ports
[info] [0.049] resolving nodes to the components
[info] [0.055] creating clock domains
[info] [0.055] pruning unconnected IOs
[info] [0.056] checking for combinational loops
[info] [0.056] NO COMBINATIONAL LOOP FOUND
[info] [0.060] COMPILING <lut3to1_1 (class lut3to1_1)> 0 CHILDREN (0,0)
In file included from ./vpi.cpp:1:
./vpi.h:4:10: fatal error: 'vpi_user.h' file not found
#include "vpi_user.h"
^
1 error generated.
[info] [0.535] g++ -c -o ./vpi.o -I$VCS_HOME/include -I./ -fPIC -std=c++11 ./vpi.cpp RET 1
[error] lut3to1_1.scala:58: failed to compile vpi.cpp in class mainStub$
[error] (run-main-0) Chisel.ChiselException: failed to compile vpi.cpp
Chisel.ChiselException: failed to compile vpi.cpp
at mainStub$.main(lut3to1_1.scala:58)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 9 s, completed Oct 4, 2015 6:33:30 PM
BigKiss:chisel mykland$
A complete listing of my source code follows:
import Chisel._
class lut3to1_1 extends Module
{
val io = new Bundle
{
val config = UInt(INPUT, 8)
val a = Bool(INPUT)
val b = Bool(INPUT)
val c = Bool(INPUT)
val out = Bool(OUTPUT)
}
io.out := (io.config(0) & !io.a & !io.b & !io.c) |
(io.config(1) & io.a & !io.b & !io.c) |
(io.config(2) & !io.a & io.b & !io.c) |
(io.config(3) & io.a & io.b & !io.c) |
(io.config(4) & !io.a & !io.b & io.c) |
(io.config(5) & io.a & !io.b & io.c) |
(io.config(6) & !io.a & io.b & io.c) |
(io.config(7) & io.a & io.b & io.c)
}
class lut3to1_1_Tests(c: lut3to1_1) extends Tester(c)
{
for ( config <- 0 to 255 )
{
poke( c.io.config, config )
for ( bits <- 0 to 7 )
{
val bitA = bits & 1
val bitB = (bits >> 1) & 1
val bitC = (bits >> 2) & 1
poke( c.io.a, bitA )
poke( c.io.b, bitB )
poke( c.io.c, bitC )
step( 1 )
val result0 = ~bitA & ~bitB & ~bitC & (config & 1)
val result1 = bitA & ~bitB & ~bitC & ((config >> 1) & 1)
val result2 = ~bitA & bitB & ~bitC & ((config >> 2) & 1)
val result3 = bitA & bitB & ~bitC & ((config >> 3) & 1)
val result4 = ~bitA & ~bitB & bitC & ((config >> 4) & 1)
val result5 = bitA & ~bitB & bitC & ((config >> 5) & 1)
val result6 = ~bitA & bitB & bitC & ((config >> 6) & 1)
val result7 = bitA & bitB & bitC & ((config >> 7) & 1)
val result = result0 | result1 | result2 | result3 |
result4 | result5 | result6 | result7
expect( c.io.out, result )
}
}
}
object mainStub
{
def main( args: Array[String] ): Unit =
{
chiselMainTest( Array[String]("--backend", "c", "--backend", "v",
"--compile", "--test", "--genHarness"), () => Module( new lut3to1_1() ) )
{
c => new lut3to1_1_Tests( c )
}
}
}
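Incidentally, the eight AND/OR product terms computed in the tester collapse to indexing the config byte by the three input bits packed as `c<<2 | b<<1 | a`. A small Python model of this (offered only as a hypothetical cross-check of the test vectors, not as Chisel code):

```python
def lut3(config, a, b, c):
    """3-input LUT: the inputs select one of the 8 config bits."""
    index = (c << 2) | (b << 1) | a
    return (config >> index) & 1

def lut3_sop(config, a, b, c):
    """Sum-of-products form, mirroring the tester's expected-value code."""
    na, nb, nc = 1 - a, 1 - b, 1 - c
    return (na & nb & nc & (config & 1)
            | a & nb & nc & ((config >> 1) & 1)
            | na & b & nc & ((config >> 2) & 1)
            | a & b & nc & ((config >> 3) & 1)
            | na & nb & c & ((config >> 4) & 1)
            | a & nb & c & ((config >> 5) & 1)
            | na & b & c & ((config >> 6) & 1)
            | a & b & c & ((config >> 7) & 1))

# The two formulations agree for all 256 configs and 8 input patterns.
assert all(lut3(cfg, b & 1, (b >> 1) & 1, (b >> 2) & 1)
           == lut3_sop(cfg, b & 1, (b >> 1) & 1, (b >> 2) & 1)
           for cfg in range(256) for b in range(8))
```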

The missing header file (vpi_user.h) is part of a Verilog simulator's VPI support, which is the mechanism Chisel uses to connect your Tester to the Verilog simulator. The current version of Chisel only supports Synopsys VCS as the Verilog simulation tool. There's experimental support for Icarus Verilog (iverilog) version 10.0+, Verilator, ModelSim and QuestaSim in my fork of Chisel (available here). Unfortunately I haven't had time to thoroughly test the changes and make a pull request to the main repository, but you can try it and see if it works for you.

Those options generate Verilog for the testbench simulator.
If you just want to generate Verilog for synthesis, simply add a chiselMain() call in your main(), like this:
object mainStub
{
def main( args: Array[String] ): Unit =
{
chiselMainTest( Array[String]("--backend", "c",
"--compile", "--test", "--genHarness"), () => Module( new lut3to1_1() ) )
{
c => new lut3to1_1_Tests( c )
}
chiselMain(args, () => Module(new lut3to1_1()))
}
}
You will get a synthesizable Verilog file named lut3to1_1.v


Missing file (net.xml) in Running Environment Flow

While working through Flow's Tutorial 01, I executed the code
flow_params = dict(
exp_tag='ring_example',
env_name=AccelEnv,
network=RingNetwork,
simulator='traci',
sim=sim_params,
env=env_params,
net=net_params,
veh=vehicles,
initial=initial_config,
tls=traffic_lights,
)
# number of time steps
flow_params['env'].horizon = 3000
exp = Experiment(flow_params)
# run the sumo simulation
_ = exp.run(1, convert_to_csv=True)
I got an error afterward; here it is:
Error during start: [Errno 2] No such file or directory: '.../kernel/network/debug/cfg/ring_example_20201208-1332481607405568.58399.net.xml' Retrying in 1 seconds...
How should it be generated or where can it be found?
This is an issue with my file naming convention. Apparently, the command invoked in
subprocess.call(
[
'netconvert -c "' + self.net_path + self.cfgfn +
'" --output-file="' + self.cfg_path + self.netfn +
'" --no-internal-links="false"'
],
stdout=subprocess.DEVNULL,
shell=True)
requires paths without spaces. In my case, my folder was named "Machine Learning", and the space in the name broke the command.
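More generally, the breakage comes from building a shell command by string concatenation. Passing the arguments as a list (and dropping `shell=True`) lets `subprocess` handle paths with spaces; here is a sketch, where the `net_path`/`cfgfn`/`cfg_path`/`netfn` names and the netconvert flags are assumed from the snippet above, and the file names are placeholders:

```python
import subprocess

def build_netconvert_cmd(net_path, cfgfn, cfg_path, netfn):
    # Each argument is its own list element, so a folder named
    # "Machine Learning" passes through intact with no manual quoting.
    return [
        "netconvert",
        "-c", net_path + cfgfn,
        "--output-file", cfg_path + netfn,
        "--no-internal-links", "false",
    ]

cmd = build_netconvert_cmd("/home/user/Machine Learning/net/", "ring.netccfg",
                           "/home/user/Machine Learning/cfg/", "ring.net.xml")
# subprocess.call(cmd, stdout=subprocess.DEVNULL)  # note: no shell=True
```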

Stanford NER Tagger and NLTK - not working [OSError: Java command failed]

I am trying to run the Stanford NER Tagger with NLTK from a Jupyter notebook.
I am continuously getting
OSError: Java command failed
I have already tried the hack at
https://gist.github.com/alvations/e1df0ba227e542955a8a
and thread
Stanford Parser and NLTK
I am using
NLTK==3.3
Ubuntu==16.04LTS
Here is my python code:
Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"
sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]
PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'
PATH_TO_JAR = '/home/root/stanford-ner.jar'
sn_3class = StanfordNERTagger(PATH_TO_GZ,
path_to_jar=PATH_TO_JAR,
encoding='utf-8')
annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]
I got these files using the following commands:
wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip
unzip stanford-parser-full-2015-04-20.zip
unzip stanford-postagger-full-2015-04-20.zip
I am getting the following error:
CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
-loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
... 2 more
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
79 def tag(self, tokens):
80 # This function should return list of tuple rather than list of list
---> 81 return sum(self.tag_sents([tokens]), [])
82
83 def tag_sents(self, sentences):
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
102 # Run the tagger and get the output
103 stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104 stdout=PIPE, stderr=PIPE)
105 stanpos_output = stanpos_output.decode(encoding)
106
/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
134 if p.returncode != 0:
135 print(_decode_stdoutdata(stderr))
--> 136 raise OSError('Java command failed : ' + str(cmd))
137
138 return (stdout, stderr)
OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']
Download Stanford Named Entity Recognizer version 3.9.1: see the ‘Download’ section of the Stanford NLP website.
Unzip it and move the two files "stanford-ner.jar" and "english.all.3class.distsim.crf.ser.gz" into your folder.
Open a Jupyter notebook or IPython prompt in that folder and run the following Python code:
import nltk
from nltk.tag.stanford import StanfordNERTagger
sentence = u"Twenty miles east of Reno, Nev., " \
"where packs of wild mustangs roam free through " \
"the parched landscape, Tesla Gigafactory 1 " \
"sprawls near Interstate 80."
jar = './stanford-ner.jar'
model = './english.all.3class.distsim.crf.ser.gz'
ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')
words = nltk.word_tokenize(sentence)
# Run NER tagger on words
print(ner_tagger.tag(words))
I tested this on NLTK==3.3 and Ubuntu==16.04 LTS
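Before constructing the tagger, a quick preflight check can separate missing-java and missing-file failures from genuine version mismatches like the ClassCastException in the traceback above (`preflight` is a hypothetical helper, just a sketch):

```python
import os
import shutil

def preflight(jar_path, model_path):
    """List problems that would surface as 'OSError: Java command failed'."""
    problems = []
    if shutil.which("java") is None:
        problems.append("java not found on PATH")
    for path in (jar_path, model_path):
        if not os.path.exists(path):
            problems.append("missing file: " + path)
    return problems

problems = preflight("./stanford-ner.jar",
                     "./english.all.3class.distsim.crf.ser.gz")
if problems:
    print("\n".join(problems))
```

An empty list does not guarantee success: the classifier file must also match the jar version, which is exactly what downloading both from the same 3.9.1 release ensures.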

Trying to install Caffe on Windows

I am trying to install Caffe from the OpenCL branch:
https://github.com/BVLC/caffe/tree/opencl
C:\Projects> git clone https://github.com/BVLC/caffe.git
C:\Projects> cd caffe
C:\Projects\caffe> git checkout windows
:: Edit any of the options inside build_win.cmd to suit your needs
C:\Projects\caffe> scripts\build_win.cmd
When I try to build, I get this error:
'MySQL' is not recognized as an internal or external command, operable program or batch file.
I think the error comes from this part of build_win.cmd:
Line 117 :: Setup the environement for VS x64
Line 118 set batch_file=!VS%MSVC_VERSION%0COMNTOOLS!..\..\VC\vcvarsall.bat
Line 119 call "%batch_file%" amd64
Here is the PATH
PATH=C:\Server\Python\python353\Scripts\;
C:\Server\Python\python353\;
C:\Users\Snarcraft\AMD APP SDK\2.9-1\bin\x86_64;
C:\Users\Snarcraft\AMD APP SDK\2.9-1\bin\x86;
C:\Program Files (x86)\AMD APP SDK\2.9-1\bin\x86_64;
C:\Program Files (x86)\AMD APP SDK\2.9-1\bin\x86;
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;
C:\ProgramData\Oracle\Java\javapath;
C:\WINDOWS\system32;
C:\WINDOWS;
C:\WINDOWS\System32\Wbem;
C:\WINDOWS\System32\WindowsPowerShell\v1.0\;
C:\Program Files (x86)\MySQL\MySQL Fabric 1.5 & MySQL Utilities 1.5\;
C:\Program Files (x86)\MySQL\MySQL Fabric 1.5 & MySQL Utilities 1.5\Doctrine extensions for PHP\;
--> C:\Program Files\MySQL\MySQL Server 5.7\bin;
C:\Server\Apache24\bin;
C:\Server\php\php-5.6.30-Win32-VC11-x64;
C:\ProgramData\ComposerSetup\bin;
C:\Server\Git\cmd;C:\Server\MATLAB\R2016a\runtime\win64;
C:\Server\MATLAB\R2016a\bin;
C:\Server\MATLAB\R2016a\polyspace\bin;
--> C:\Program Files\CMake\bin;
C:\Server\Miniconda3;
C:\Server\Miniconda3\Scripts;
C:\Server\Miniconda3\Library\bin;
C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;
C:\Users\Snarcraft\AppData\Local\Microsoft\WindowsApps;
C:\Users\Snarcraft\AppData\Roaming\npm;
C:\Users\Snarcraft\AppData\Roaming\Composer\vendor\bin;
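One plausible culprit is the literal `&` in the MySQL PATH entries above: cmd treats an unquoted `&` as a command separator, so when a batch script expands PATH without quotes, everything after the `&` is run as a new command, which would produce exactly `'MySQL' is not recognized ...`. A Python helper to spot such entries (`risky_path_entries` is my own name, just a debugging sketch):

```python
def risky_path_entries(path_value):
    """Return PATH entries containing cmd metacharacters ('&' or '^')
    that can break unquoted %PATH% expansion in batch scripts."""
    return [entry for entry in path_value.split(";")
            if "&" in entry or "^" in entry]

sample = (r"C:\Server\Python\python353\;"
          r"C:\Program Files (x86)\MySQL\MySQL Fabric 1.5 & MySQL Utilities 1.5\;"
          r"C:\Program Files\CMake\bin")
print(risky_path_entries(sample))
```

Quoting the expansion, or temporarily removing the offending entries from PATH before running build_win.cmd, avoids the split.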
The content of build_win.cmd
#echo off
#setlocal EnableDelayedExpansion
:: Default values
if DEFINED APPVEYOR (
echo Setting Appveyor defaults
if NOT DEFINED MSVC_VERSION set MSVC_VERSION=14
if NOT DEFINED WITH_NINJA set WITH_NINJA=1
if NOT DEFINED CPU_ONLY set CPU_ONLY=1
if NOT DEFINED CUDA_ARCH_NAME set CUDA_ARCH_NAME=Auto
if NOT DEFINED CMAKE_CONFIG set CMAKE_CONFIG=Release
if NOT DEFINED USE_NCCL set USE_NCCL=0
if NOT DEFINED CMAKE_BUILD_SHARED_LIBS set CMAKE_BUILD_SHARED_LIBS=0
if NOT DEFINED PYTHON_VERSION set PYTHON_VERSION=2
if NOT DEFINED BUILD_PYTHON set BUILD_PYTHON=1
if NOT DEFINED BUILD_PYTHON_LAYER set BUILD_PYTHON_LAYER=1
if NOT DEFINED BUILD_MATLAB set BUILD_MATLAB=1
if NOT DEFINED PYTHON_EXE set PYTHON_EXE=python
if NOT DEFINED RUN_TESTS set RUN_TESTS=1
if NOT DEFINED RUN_LINT set RUN_LINT=1
if NOT DEFINED RUN_INSTALL set RUN_INSTALL=1
:: Set python 2.7 with conda as the default python
if !PYTHON_VERSION! EQU 2 (
set CONDA_ROOT=C:\Server\Miniconda3
)
:: Set python 3.5 with conda as the default python
if !PYTHON_VERSION! EQU 3 (
set CONDA_ROOT=C:\Server\Miniconda3
)
set PATH=!CONDA_ROOT!;!CONDA_ROOT!\Scripts;!CONDA_ROOT!\Library\bin;!PATH!
:: Check that we have the right python version
!PYTHON_EXE! --version
:: Add the required channels
conda config --add channels conda-forge
conda config --add channels willyd
:: Update conda
conda update conda -y
:: Download other required packages
conda install --yes cmake ninja numpy scipy protobuf==3.1.0 six scikit-image pyyaml pydotplus graphviz
if ERRORLEVEL 1 (
echo ERROR: Conda update or install failed
exit /b 1
)
:: Install cuda and disable tests if needed
if !WITH_CUDA! == 1 (
call %~dp0\appveyor\appveyor_install_cuda.cmd
set RUN_TESTS=0
set USE_NCCL=1
) else (
set CPU_ONLY=1
)
:: Disable the tests in debug config
if "%CMAKE_CONFIG%" == "Debug" (
echo Disabling tests on appveyor with config == %CMAKE_CONFIG%
set RUN_TESTS=0
)
:: Disable linting with python 3 until we find why the script fails
if !PYTHON_VERSION! EQU 3 (
set RUN_LINT=0
)
) else (
:: Change the settings here to match your setup
:: Change MSVC_VERSION to 12 to use VS 2013
if NOT DEFINED MSVC_VERSION set MSVC_VERSION=14
:: Change to 1 to use Ninja generator (builds much faster)
if NOT DEFINED WITH_NINJA set WITH_NINJA=0
:: Change to 1 to build caffe without CUDA support
if NOT DEFINED CPU_ONLY set CPU_ONLY=0
:: Change to generate CUDA code for one of the following GPU architectures
:: [Fermi Kepler Maxwell Pascal All]
if NOT DEFINED CUDA_ARCH_NAME set CUDA_ARCH_NAME=Auto
:: Change to Debug to build Debug. This is only relevant for the Ninja generator the Visual Studio generator will generate both Debug and Release configs
if NOT DEFINED CMAKE_CONFIG set CMAKE_CONFIG=Release
:: Set to 1 to use NCCL
if NOT DEFINED USE_NCCL set USE_NCCL=0
:: Change to 1 to build a caffe.dll
if NOT DEFINED CMAKE_BUILD_SHARED_LIBS set CMAKE_BUILD_SHARED_LIBS=0
:: Change to 3 if using python 3.5 (only 2.7 and 3.5 are supported)
if NOT DEFINED PYTHON_VERSION set PYTHON_VERSION=2
:: Change these options for your needs.
if NOT DEFINED BUILD_PYTHON set BUILD_PYTHON=1
if NOT DEFINED BUILD_PYTHON_LAYER set BUILD_PYTHON_LAYER=1
if NOT DEFINED BUILD_MATLAB set BUILD_MATLAB=1
:: If python is on your path leave this alone
if NOT DEFINED PYTHON_EXE set PYTHON_EXE=python
:: Run the tests
if NOT DEFINED RUN_TESTS set RUN_TESTS=0
:: Run lint
if NOT DEFINED RUN_LINT set RUN_LINT=0
:: Build the install target
if NOT DEFINED RUN_INSTALL set RUN_INSTALL=0
:: Enable CUDA backend
if NOT DEFINED USE_CUDA set USE_CUDA=0
:: Use cuDNN acceleration with CUDA backend
if NOT DEFINED USE_CUDNN set USE_CUDNN=0
:: Use OpenCL backend
if NOT DEFINED USE_GREENTEA set USE_GREENTEA=1
:: Use LibDNN acceleration with OpenCL and/or CUDA backend
if NOT DEFINED USE_LIBDNN set USE_LIBDNN=1
:: Use OpenMP (disable this on systems with #NUMA > 1)
if NOT DEFINED USE_OPENMP set USE_OPENMP=0
:: Use 64 bit indexing for very large memory blob support (above 2G)
if NOT DEFINED USE_INDEX64 set USE_INDEX64=0
:: Use Intel spatial kernels acceleration for forward convolution on Intel iGPUs
if NOT DEFINED USE_INTEL_SPATIAL set USE_INTEL_SPATIAL=0
:: Disable host/device shared memory
if NOT DEFINED DISABLE_DEVICE_HOST_UNIFIED_MEMORY set DISABLE_DEVICE_HOST_UNIFIED_MEMORY=0
)
:: Set the appropriate CMake generator
:: Use the exclamation mark ! below to delay the
:: expansion of CMAKE_GENERATOR
if %WITH_NINJA% EQU 0 (
if "%MSVC_VERSION%"=="14" (
set CMAKE_GENERATOR=Visual Studio 14 2015 Win64
)
if "%MSVC_VERSION%"=="12" (
set CMAKE_GENERATOR=Visual Studio 12 2013 Win64
)
if "!CMAKE_GENERATOR!"=="" (
echo ERROR: Unsupported MSVC version
exit /B 1
)
) else (
set CMAKE_GENERATOR=Ninja
)
echo INFO: ============================================================
echo INFO: Summary:
echo INFO: ============================================================
echo INFO: MSVC_VERSION = !MSVC_VERSION!
echo INFO: WITH_NINJA = !WITH_NINJA!
echo INFO: CMAKE_GENERATOR = "!CMAKE_GENERATOR!"
echo INFO: CPU_ONLY = !CPU_ONLY!
echo INFO: USE_CUDA = !USE_CUDA!
echo INFO: CUDA_ARCH_NAME = !CUDA_ARCH_NAME!
echo INFO: USE_CUDNN = !USE_CUDNN!
echo INFO: USE_GREENTEA = !USE_GREENTEA!
echo INFO: USE_LIBDNN = !USE_LIBDNN!
echo INFO: USE_OPENMP = !USE_OPENMP!
echo INFO: USE_INDEX64 = !USE_INDEX_64!
echo INFO: USE_INTEL_SPATIAL = !USE_INTEL_SPATIAL!
echo INFO: DISABLE_DEVICE_HOST_UNIFIED_MEMORY = !DISABLE_DEVICE_HOST_UNIFIED_MEMORY!
echo INFO: CMAKE_CONFIG = !CMAKE_CONFIG!
echo INFO: USE_NCCL = !USE_NCCL!
echo INFO: CMAKE_BUILD_SHARED_LIBS = !CMAKE_BUILD_SHARED_LIBS!
echo INFO: PYTHON_VERSION = !PYTHON_VERSION!
echo INFO: BUILD_PYTHON = !BUILD_PYTHON!
echo INFO: BUILD_PYTHON_LAYER = !BUILD_PYTHON_LAYER!
echo INFO: BUILD_MATLAB = !BUILD_MATLAB!
echo INFO: PYTHON_EXE = "!PYTHON_EXE!"
echo INFO: RUN_TESTS = !RUN_TESTS!
echo INFO: RUN_LINT = !RUN_LINT!
echo INFO: RUN_INSTALL = !RUN_INSTALL!
echo INFO: ============================================================
:: Build and exectute the tests
:: Do not run the tests with shared library
if !RUN_TESTS! EQU 1 (
if %CMAKE_BUILD_SHARED_LIBS% EQU 1 (
echo WARNING: Disabling tests with shared library build
set RUN_TESTS=0
)
)
if NOT EXIST build mkdir build
pushd build
:: Setup the environement for VS x64
set batch_file=!VS%MSVC_VERSION%0COMNTOOLS!..\..\VC\vcvarsall.bat
call "%batch_file%" amd64
:: Configure using cmake and using the caffe-builder dependencies
:: Add -DCUDNN_ROOT=C:/Projects/caffe/cudnn-8.0-windows10-x64-v5.1/cuda ^
:: below to use cuDNN
cmake -G"!CMAKE_GENERATOR!" ^
-DBLAS=Open ^
-DCMAKE_BUILD_TYPE:STRING=%CMAKE_CONFIG% ^
-DBUILD_SHARED_LIBS:BOOL=%CMAKE_BUILD_SHARED_LIBS% ^
-DBUILD_python:BOOL=%BUILD_PYTHON% ^
-DBUILD_python_layer:BOOL=%BUILD_PYTHON_LAYER% ^
-DBUILD_matlab:BOOL=%BUILD_MATLAB% ^
-DCPU_ONLY:BOOL=%CPU_ONLY% ^
-DUSE_CUDA:BOOL=%USE_CUDA% ^
-DUSE_CUDNN:BOOL=%USE_CUDNN% ^
-DUSE_LIBDNN:BOOL=%USE_LIBDNN% ^
-DUSE_GREENTEA:BOOL=%USE_GREENTEA% ^
-DUSE_OPENMP:BOOL=%USE_OPENMP% ^
-DUSE_INDEX64:BOOL=%USE_INDEX64% ^
-DUSE_INTEL_SPATIAL:BOOL=%USE_INTEL_SPATIAL% ^
-DDISABLE_DEVICE_HOST_UNIFIED_MEMORY=%DISABLE_DEVICE_HOST_UNIFIED_MEMORY% ^
-DCOPY_PREREQUISITES:BOOL=1 ^
-DINSTALL_PREREQUISITES:BOOL=1 ^
-DUSE_NCCL:BOOL=!USE_NCCL! ^
-DCUDA_ARCH_NAME:STRING=%CUDA_ARCH_NAME% ^
"%~dp0\.."
if ERRORLEVEL 1 (
echo ERROR: Configure failed
exit /b 1
)
:: Lint
if %RUN_LINT% EQU 1 (
cmake --build . --target lint --config %CMAKE_CONFIG%
)
if ERRORLEVEL 1 (
echo ERROR: Lint failed
exit /b 1
)
:: Build the library and tools
cmake --build . --config %CMAKE_CONFIG%
if ERRORLEVEL 1 (
echo ERROR: Build failed
exit /b 1
)
:: Build and exectute the tests
if !RUN_TESTS! EQU 1 (
cmake --build . --target runtest --config %CMAKE_CONFIG%
if ERRORLEVEL 1 (
echo ERROR: Tests failed
exit /b 1
)
if %BUILD_PYTHON% EQU 1 (
if %BUILD_PYTHON_LAYER% EQU 1 (
:: Run python tests only in Release build since
:: the _caffe module is _caffe-d is debug
if "%CMAKE_CONFIG%"=="Release" (
:: Run the python tests
cmake --build . --target pytest
if ERRORLEVEL 1 (
echo ERROR: Python tests failed
exit /b 1
)
)
)
)
)
if %RUN_INSTALL% EQU 1 (
cmake --build . --target install --config %CMAKE_CONFIG%
)
popd
#endlocal
That's all.

java.util.NoSuchElementException: None.get for Vec

Perhaps I'm going about something the wrong way. I have a number of buffers that need to get locked and unlocked as part of the behavior of a state machine. I thought it would be perfect to use a Vec of Reg to store the state from clock to clock and use a var Vec of wires to accumulate the state as the state machine goes about locking and unlocking things. Here is code similar to the code I wrote that breaks in the same way:
import Chisel._
class testvec extends Module
{
val io = new Bundle
{
val addr = Vec( 5, UInt( INPUT, 4 ) )
val enable = Bool( INPUT )
val in = Vec( 5, UInt( INPUT, 16 ) )
val out = Vec( 16, UInt( OUTPUT, 16 ) )
}
val latch = Vec( 16, Reg( init=UInt(0,16) ) )
var temp = Vec( 16, UInt(0,16) )
for( i <- 0 until 16 )
{
temp(i) := latch(i)
}
for( i <- 0 until 5 )
{
temp(io.addr(i)) := io.in(i)
}
for( i <- 0 until 16 )
{
io.out(i) := temp(i)
}
when( io.enable )
{
for( i <- 0 until 16 )
{
latch(i) := temp(i)
}
}
}
class testvec_Tests(c: testvec) extends Tester(c)
{
step( 1 )
}
object mainStub
{
def main( args: Array[String] ): Unit =
{
chiselMainTest( Array[String]("--backend", "c", // "--backend", "v",
"--compile", "--test", "--genHarness"),
() => Module( new testvec() ) )
{
c => new testvec_Tests( c )
}
}
}
Note that although this code merely has a simple loop, in the real design I need the combinational lock state at various points during each clock cycle of the state machine; that is why this simplification exposes the combinational state, rather than the registers, as the final output.
Here's the full text of the error message:
[info] Set current project to chisel
[info] Running mainStub
[error] (run-main-0) java.util.NoSuchElementException: None.get
java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at Chisel.ROMData$$anonfun$3.apply(ROM.scala:90)
at Chisel.ROMData$$anonfun$3.apply(ROM.scala:90)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:750)
at scala.collection.immutable.RedBlackTree$TreeIterator.foreach(RedBlackTree.scala:468)
at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at Chisel.ROMData.<init>(ROM.scala:90)
at Chisel.ROM.data$lzycompute(ROM.scala:72)
at Chisel.ROM.data(ROM.scala:72)
at Chisel.ROM.read(ROM.scala:77)
at Chisel.Vec.apply(Vec.scala:121)
at testvec$$anonfun$2.apply$mcVI$sp(testvec.scala:21)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:166)
at testvec.<init>(testvec.scala:19)
at mainStub$$anonfun$main$1$$anonfun$apply$1.apply(testvec.scala:47)
at mainStub$$anonfun$main$1$$anonfun$apply$1.apply(testvec.scala:47)
at Chisel.Module$.Chisel$Module$$init(Module.scala:65)
at Chisel.Module$.apply(Module.scala:50)
at mainStub$$anonfun$main$1.apply(testvec.scala:47)
at mainStub$$anonfun$main$1.apply(testvec.scala:47)
at Chisel.Driver$.execute(Driver.scala:101)
at Chisel.Driver$.apply(Driver.scala:41)
at Chisel.Driver$.apply(Driver.scala:64)
at Chisel.chiselMain$.apply(hcl.scala:63)
at Chisel.chiselMainTest$.apply(hcl.scala:76)
at mainStub$.main(testvec.scala:48)
at mainStub.main(testvec.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 1 s, completed Mar 3, 2016 1:49:17 PM
Are you sure about your Vec declarations?
According to the documentation, I think Vec must be declared as follows:
val io = new Bundle
{
val addr = Vec.fill(5) {UInt( INPUT, 4 )}
val enable = Bool( INPUT )
val in = Vec.fill( 5 ) {UInt( INPUT, 16 )}
val out = Vec.fill( 16 ) {UInt( OUTPUT, 16 )}
}

Haskell: why is pi-forall getting loaded in a JSON parsing example?

I am trying to learn parsing JSON in Haskell via https://www.fpcomplete.com/school/pick-of-the-week/episode-1-json
When I load the file (shown after this interactive listing) I get:
> ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :set -v
hiding package binary-0.5.1.1 to avoid conflict with later version binary-0.7.1.0
hiding package Cabal-1.16.0 to avoid conflict with later version Cabal-1.18.0
hiding package QuickCheck-2.4.2 to avoid conflict with later version QuickCheck-2.6
hiding package syb-0.3.7 to avoid conflict with later version syb-0.4.0
hiding package hakyll-4.2.2.0 to avoid conflict with later version hakyll-4.3.1.0
wired-in package ghc-prim mapped to ghc-prim-0.3.0.0-d5221a8c8a269b66ab9a07bdc23317dd
wired-in package integer-gmp mapped to integer-gmp-0.5.0.0-2f15426f5b53fe4c6490832f9b20d8d7
wired-in package base mapped to base-4.6.0.1-6c351d70a24d3e96f315cba68f3acf57
wired-in package rts mapped to builtin_rts
wired-in package template-haskell mapped to template-haskell-2.8.0.0-c2c1b21dbbb37ace4b7dc26c966ec664
wired-in package dph-seq not found.
wired-in package dph-par not found.
Prelude> :l X03ObjSetsTweetReader
*** Chasing dependencies:
Chasing modules from:
Stable obj: []
Stable BCO: []
unload: retaining objs []
unload: retaining bcos []
Ready for upsweep []
Upsweep completely successful.
*** Deleting temp files:
Deleting:
*** Chasing dependencies:
Chasing modules from: *X03ObjSetsTweetReader.hs
Stable obj: []
Stable BCO: []
unload: retaining objs []
unload: retaining bcos []
Ready for upsweep
[NONREC
ModSummary {
ms_hs_date = 2013-10-02 02:30:39 UTC
ms_mod = main:X03ObjSetsTweetSetTest,
ms_textual_imps = [import (implicit) Prelude, import Control.Monad,
import Control.Applicative, import Data.Monoid, import Data.Aeson,
import Parser (),
import qualified Data.ByteString.Lazy.Char8 as C8,
import qualified Data.ByteString.Lazy as BL]
ms_srcimps = []
}]
*** Deleting temp files:
Deleting:
compile: input file X03ObjSetsTweetReader.hs
Created temporary directory: /var/folders/dw/c7gq7tw9339grqjctgwbmgzm0000gy/T/ghc18739_0
*** Checking old interface for main:X03ObjSetsTweetSetTest:
[1 of 1] Compiling X03ObjSetsTweetSetTest ( X03ObjSetsTweetReader.hs, interpreted )
*** Parser:
*** Renamer/typechecker:
*** Desugar:
Result size of Desugar (after optimization)
= {terms: 974, types: 767, coercions: 0}
*** Simplifier:
Result size of Simplifier iteration=1
= {terms: 966, types: 761, coercions: 45}
Result size of Simplifier = {terms: 966, types: 761, coercions: 45}
*** Tidy Core:
Result size of Tidy Core = {terms: 966, types: 761, coercions: 45}
*** CorePrep:
Result size of CorePrep
= {terms: 1,646, types: 1,390, coercions: 45}
*** ByteCodeGen:
Upsweep completely successful.
*** Deleting temp files:
Deleting: /var/folders/dw/c7gq7tw9339grqjctgwbmgzm0000gy/T/ghc18739_0/ghc18739_0.c /var/folders/dw/c7gq7tw9339grqjctgwbmgzm0000gy/T/ghc18739_0/ghc18739_0.o
Warning: deleting non-existent /var/folders/dw/c7gq7tw9339grqjctgwbmgzm0000gy/T/ghc18739_0/ghc18739_0.c
Warning: deleting non-existent /var/folders/dw/c7gq7tw9339grqjctgwbmgzm0000gy/T/ghc18739_0/ghc18739_0.o
Ok, modules loaded: X03ObjSetsTweetSetTest.
*X03ObjSetsTweetSetTest> main
*** Parser:
*** Desugar:
*** Simplify:
*** CorePrep:
*** ByteCodeGen:
Loading package array-0.4.0.1 ... linking ... done.
Loading package deepseq-1.3.0.1 ... linking ... done.
Loading package primitive-0.5.0.1 ... linking ... done.
Loading package vector-0.10.0.1 ... linking ... done.
Loading package bytestring-0.10.0.2 ... linking ... done.
Loading package text-0.11.3.1 ... linking ... done.
Loading package hashable-1.1.2.5 ... linking ... done.
Loading package unordered-containers-0.2.3.0 ... linking ... done.
Loading package transformers-0.3.0.0 ... linking ... done.
Loading package mtl-2.1.2 ... linking ... done.
Loading package parsec-3.1.3 ... linking ... done.
Loading package containers-0.5.0.0 ... linking ... done.
Loading package attoparsec-0.10.4.0 ... linking ... done.
Loading package pretty-1.1.1.0 ... linking ... done.
Loading package old-locale-1.0.0.5 ... linking ... done.
Loading package time-1.4.0.1 ... linking ... done.
Loading package HUnit-1.2.5.2 ... linking ... done.
Loading package random-1.0.1.1 ... linking ... done.
Loading package template-haskell ... linking ... done.
Loading package QuickCheck-2.6 ... linking ... done.
Loading package type-equality-0.1.2 ... linking ... done.
Loading package RepLib-0.5.3.1 ... linking ... done.
Loading package bimap-0.2.4 ... linking ... done.
Loading package filepath-1.3.0.1 ... linking ... done.
*** gcc:
'/usr/bin/gcc' '-m64' '-fno-stack-protector' '-m64' '-L/Library/Frameworks/GHC.framework/Versions/7.6.3-x86_64/usr/lib/ghc-7.6.3/unix-2.6.0.1' '--print-file-name' 'libdl.dylib'
Loading package unix-2.6.0.1 ... linking ... done.
Loading package directory-1.2.0.1 ... linking ... done.
Loading package unbound-0.4.2 ... linking ... done.
Loading package pi-forall-0.1 ... linking ... <interactive>:
lookupSymbol failed in relocateSection (relocate external)
/Users/carr/Library/Haskell/ghc-7.6.3/lib/pi-forall-0.1/lib/HSpi-forall-0.1.o: unknown symbol `_pizmforallzm0zi1_LayoutToken_makeTokenParser_info'
ghc: unable to load package `pi-forall-0.1'
*X03ObjSetsTweetSetTest>
Why in the heck would it be loading pi-forall? That is Stephanie Weirich's demo implementation from OPLSS 2013. How can I track down who/what/why this is being loaded?
Here is X03ObjSetsTweetSetTest.hs:
module X03ObjSetsTweetSetTest where
import qualified Data.ByteString.Lazy as BL
import qualified Data.ByteString.Lazy.Char8 as C8
import Parser()
import Data.Aeson
import Data.Monoid
import Control.Applicative
import Control.Monad
data Recipe = Recipe
{ recipeName :: String
, ingredients :: [Ingredient]
, steps :: [Step]
} deriving Show
type Measure = String
data Ingredient = Ingredient
{ ingredientName :: String
, quantity :: Int
, measure :: Maybe Measure
} deriving Show
data Step = Step
{ stepName :: String
, order :: Int
, stepDuration :: Maybe Duration
} deriving (Eq, Show)
instance Ord Step where
compare s1 s2 = compare (order s1) (order s2)
data Duration = Duration
{ duration :: Int
, durationMeasure :: Measure
} deriving (Eq, Show)
-------------------------------------------------------------------------------
instance FromJSON Recipe where
parseJSON (Object r) = Recipe <$>
r .: "name" <*>
r .: "ingredients" <*>
r .: "steps"
parseJSON _ = mzero
instance ToJSON Recipe where
toJSON (Recipe n i s) = object ["name" .= n, "ingredients" .= i, "steps" .= s]
-------------------------------------------------------------------------------
instance FromJSON Step where
parseJSON (Object s) = Step <$>
s .: "step" <*>
s .: "order" <*>
s .:? "duration"
parseJSON _ = mzero
instance ToJSON Step where
toJSON (Step s o d) = object ["step" .= s, "order" .= o, "duration" .= d]
-------------------------------------------------------------------------------
instance FromJSON Ingredient where
parseJSON (Object i) = Ingredient <$>
i .: "name" <*>
i .: "quantity" <*>
i .:? "measure"
parseJSON _ = mzero
instance ToJSON Ingredient where
toJSON (Ingredient n q m) = object ["name" .= n, "quantity" .= q, "measure" .= m]
-------------------------------------------------------------------------------
instance FromJSON Duration where
parseJSON (Object d) = Duration <$>
d .: "duration" <*>
d .: "measure"
parseJSON _ = mzero
instance ToJSON Duration where
toJSON (Duration d m) = object ["duration" .= d, "measure" .= m]
-------------------------------------------------------------------------------
main :: IO ()
main = do
let toParse = C8.unlines $ map C8.pack [
"{ ",
" \"name\": \"Ciambellone Cake\", ",
" \"ingredients\": [ ",
" { ",
" \"name\": \"Flour\", ",
" \"quantity\": 250, ",
" \"measure\": \"gr\" ",
" }, ",
" { ",
" \"name\": \"Sugar\", ",
" \"quantity\": 250, ",
" \"measure\": \"gr\" ",
" }, ",
" { ",
" \"name\": \"Sunflower Oil\", ",
" \"quantity\": 130, ",
" \"measure\": \"ml\" ",
" }, ",
" { ",
" \"name\": \"Water\", ",
" \"quantity\": 130, ",
" \"measure\": \"ml\" ",
" }, ",
" { ",
" \"name\": \"Egg\", ",
" \"quantity\": 3 ",
" }, ",
" { ",
" \"name\": \"Yeast\", ",
" \"quantity\": 1 ",
" } ",
" ], ",
" \"steps\": [ ",
" { ",
" \"step\": \"Mix everything\", ",
" \"order\": 1 ",
" }, ",
" { ",
" \"step\": \"Cook in oven at 200 degrees\", ",
" \"order\": 2, ",
" \"duration\": { ",
" \"duration\": 30, ",
" \"measure\": \"minutes\" ",
" } ",
" } ",
" ] ",
"} "
]
in case (eitherDecode' toParse :: Either String Recipe) of
Right r -> print r
Left e -> C8.putStrLn $ C8.pack e <> " in " <> toParse
See Gabriel's response: the `import Parser ()` line was a cut-and-paste error; that module name resolves to the Parser module in the pi-forall package, which is why GHC tries to load it.