Using Chisel on a relatively large project: how to check progress after 'Done elaborating' - chisel

I am using Chisel to implement a project that instantiates many copies of the same module, which I do with a for loop. The project compiles without problems, but it has been stuck ever since printing 'Done elaborating'. Chisel generates a .fir file and then Verilog; I want to know where to look for the intermediate files produced after the 'Done elaborating' step.
[info] [0.002] Elaborating design...
[info] [3.443] Done elaborating.

Depending on how you're invoking Chisel, you can turn up the log level, which should print progress through the FIRRTL compiler. On the command line, this is done with -ll info.
As for "where" to look for the files, they should all be written to the target directory. The default is your current working directory; it can be set with -td <directory> on the command line.

Related

Programmatically create gitlab-ci.yml file?

Is there any tool to generate a .gitlab-ci.yml file, like Jenkins has the job-dsl-plugin to create jobs?
Jenkins DSL plugin allows me to generate jobs using Groovy, which outputs an xml that describes a job for Jenkins.
I can use DSL and a json file to generate jobs in Jenkins. What I’m looking for is a tool to help me generate .gitlab-ci.yml based on a specification.
The main question I have to ask is: what is your goal?
If the goal is just to reduce maintenance effort for repeating job snippets:
Sometimes .gitlab-ci.yml files are pretty similar across a lot of projects, and you want to manage them centrally. In that case I recommend taking a look at Having Gitlab Projects calling the same gitlab-ci.yml stored in a central location, which shows multiple ways of centralizing your build.
If the goal is to generate the pipeline configuration because the build is highly flexible:
This is really a templating task, and it can be achieved in nearly every scripting language you like: plain bash, Groovy, Python, Go, you name it. In the end the question is what kind of flexibility you strive for and what kind of logic you need for the generation. I will not go into detail on how to generate the .gitlab-ci.yml file, but rather on how to use it in the next step, because in my opinion that is the most crucial part. You can simply generate the file and commit it, but you can also have GitLab CI generate it for you and use it in the next job of your pipeline:
setup:
  script:
    - echo ".." # generate your yaml file here, maybe use a custom image
  artifacts:
    paths:
      - generated.gitlab-ci.yml

trigger:
  needs:
    - setup
  trigger:
    include:
      - artifact: generated.gitlab-ci.yml
        job: setup
    strategy: depend
This allows you to generate a child pipeline and execute it - we use this for highly generic builds in monorepos.
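For illustration, the setup job could emit a trivial child pipeline like this (the job name and the echoed content are placeholders; in a real setup the script would run your templating logic instead):
setup:
  script:
    - |
      cat > generated.gitlab-ci.yml <<'EOF'
      generated-job:
        script:
          - echo "hello from the generated child pipeline"
      EOF
  artifacts:
    paths:
      - generated.gitlab-ci.yml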
For further reading, see:
GitLab JSONNET Example - documentation example for generated yml files within a pipeline
Dynamic Childpipelines - documentation for dynamically created pipelines

ivy:publish with multiple modules - how to continue publishing the others if one fails

I have an Ant project with over 100 modules. I cycle through all modules to compile, package, and publish them in one build run. However, when one ivy:publish fails (due to a random connection issue), the entire build exits.
I would like the build process to continue compiling/publishing the remaining modules even if one module fails to publish for whatever reason.
Is there a setting in ivy:publish to prevent exiting upon error, or some other way to achieve this?
Thanks
Since you appear to be using Ant to call multiple sub-builds, I would submit this is a control-loop problem rather than something specific to Ivy. In other words, you are best advised to ensure each module's build is as stand-alone as you can make it, and then in your loop each module's build can succeed or fail on its own.
You have not indicated what your main build file looks like, but I would highly recommend using the subant task, which has a "failonerror" flag that gives you your desired behaviour when set to false (the build will continue on if a module fails):
<subant failonerror="false">
  <!-- failonerror="false" lets the loop continue when one module's build fails -->
  <fileset dir="." includes="**/build.xml" excludes="build.xml"/>
  <target name="clean"/>
  <target name="build"/>
</subant>
This should be enough to solve your problem. Any build that fails can be manually re-run. In practice this might be tricky, since one module failing might cause subsequent builds to fail due to missing dependencies... You need to judge the risks of this for yourself.
You can take the solution further later by using an embedded script to run the module builds; if you hit lots of errors you might want to add some bespoke error-handling logic, as sketched below.
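For example, a rough sketch using Ant's script task (this assumes a JavaScript engine is available to Ant, and hard-codes the module list purely for illustration):
<target name="build-all">
  <script language="javascript"><![CDATA[
    // iterate the module build files and record which ones fail
    var modules = ["moduleA", "moduleB"]; // discover these properly in practice
    var failed = [];
    for (var i = 0; i < modules.length; i++) {
      try {
        var ant = project.createTask("ant");
        ant.setAntfile(modules[i] + "/build.xml");
        ant.setTarget("publish");
        ant.execute();
      } catch (e) {
        failed.push(modules[i]);
      }
    }
    if (failed.length > 0)
      project.log("Failed modules: " + failed.join(", "));
  ]]></script>
</target>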

How to allow web-component-tester to run tests stored with my components

I am experimenting with Polymer to build an SPA. This will include a large number of custom elements at various levels in the overall application hierarchy. I would like to use web-component-tester to run the module tests on them.
web-component-tester seems to be opinionated about where the tests are stored - in a separate test directory, where it will run all files found.
I am of the opposite opinion. I would like to store tests in the same directory as the element definition, and differentiate tests by naming them xxx.test.html (or possibly xxx.test.js). I also want to run different "sets" of tests controlled by gulp: some will watch for changes and then run the tests (for the app side of my project), and some will be elements that use core-ajax to unit test my server-side scripts. The latter will more than likely be in a totally different directory hierarchy (my dist directory) and will be served by a proper web server.
I "think" the "suite" config option wct-conf.js file in my project root might be how I can define this, or alternatively a wct command with some file globs. Unfortunately web-component-tester's README is somewhat confusing on any detail and when you have your own web server it says "You'll need to save WCT's browser.js in order to go this route." What does that mean?
Can someone enlighten me on how can get WCT to run each of the elements/**/*.test.html files as its own "suite" ( I actually intend to use describe, it format - but I assume I still need to use the term suite).
Can someone also explain what I need to do the browser.js when I have my own web server.
I ran some experiments and did a bit of debugging with node-inspector. Firstly, the command line overrides the suites parameter in the config file:
wct app/elements/**/*.test.html
does find all my module tests if I have them stored with the elements, and ignores the contents of the wct.conf.js file's suites parameter.
Putting the same value (i.e. app/elements/**/*.test.html) in the wct.conf.js file's suites parameter does the same job, and in that mode gulp test:local also works correctly.
So to run different tests for module and distribution, I just need to set up wct.conf.js for my module tests, and set up gulp to run a command line pointing at the correct location of my distribution test files.
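For reference, a minimal wct.conf.js along these lines should express that setup (the plugins section is illustrative; adjust the browser list to taste):
// wct.conf.js - run the tests stored alongside the elements
module.exports = {
  suites: ['app/elements/**/*.test.html'],
  plugins: {
    local: {browsers: ['chrome']}
  }
};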
I still haven't understood the instructions for running with your own web server.

Building GPL C program with CUDA module

I am attempting to modify a GPL program written in C. My goal is to replace one method with a CUDA implementation, which means I need to compile with nvcc instead of gcc. I need help building the project - not implementing it (You don't need to know anything about CUDA C to help, I don't think).
This is my first time trying to change a C project of moderate complexity that involves a configure script and Makefile. Honestly, this is my first time doing anything in C in a long time, including anything involving gcc or g++, so I'm pretty lost.
I'm not super interested in learning configure and Makefiles - this is more of an experiment. I would like to see if the project implementation goes well before spending time creating a proper build script. (Not unwilling to learn as necessary, just trying to give an idea of the scope).
With that said, what are my options for building this project? I have a myriad of questions...
I tried adding "CC=nvcc" to the configure.in file after AC_PROG_CC. This appeared to work - output from running configure and make showed nvcc as the compiler. However, make failed to compile the source file with the CUDA kernel, not recognizing the CUDA-specific syntax. I don't know why; I was hoping this would just work.
Is it possible to compile a source file with nvcc, and then include it at the linking step in the make process for the main program? If so, how? (This question might not make sense - I'm really rusty at this)
What's the correct way to do this?
Is there a quick and dirty way I could use for testing purposes?
Is there some secret tool everyone uses to set up and understand these configure scripts and Makefiles? This is even worse than the Apache Ant scripts I'm used to. (Yeah, I'm out of my realm.)
You don't need to compile everything with nvcc. Your guess that you can compile just your CUDA code with nvcc and leave everything else (except linking) to the host compiler is correct. Here's the approach I would use to start.
Add one new header (e.g. myCudaImplementation.h) and one new source file (with a .cu extension, e.g. myCudaImplementation.cu). The source file contains your kernel implementation as well as a (host) C wrapper function that invokes the kernel with the appropriate execution configuration (aka <<<>>>) and arguments. The header file contains the prototype for the C wrapper function. Let's call that wrapper function runCudaImplementation().
I would also provide another host C function in the source file (with prototype in the header) that queries and configures the GPU devices present and returns true if it is successful, false if not. Let's call this function configureCudaDevice().
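For concreteness, here is a minimal sketch of what those two files might contain (the kernel body and the execution configuration are placeholders; only the two wrapper names come from the discussion above):
// myCudaImplementation.h - prototypes for the C-callable wrappers
#ifndef MY_CUDA_IMPLEMENTATION_H
#define MY_CUDA_IMPLEMENTATION_H
#include <stdbool.h>
#ifdef __cplusplus
extern "C" {
#endif
bool configureCudaDevice(void);
void runCudaImplementation(void);
#ifdef __cplusplus
}
#endif
#endif

// myCudaImplementation.cu - kernel plus C-callable wrappers
#include <cuda_runtime.h>
#include "myCudaImplementation.h"

__global__ void myKernel(void)
{
    // your CUDA implementation of the method goes here
}

extern "C" bool configureCudaDevice(void)
{
    int count = 0;
    // report failure if no usable CUDA device is present
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return false;
    return cudaSetDevice(0) == cudaSuccess;
}

extern "C" void runCudaImplementation(void)
{
    myKernel<<<1, 256>>>();   // placeholder execution configuration
    cudaDeviceSynchronize();  // wait for the kernel to finish
}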
Now in your original C code, where you would normally call your CPU implementation you can do this.
// must include your new header
#include "myCudaImplementation.h"
#include <stdbool.h> // for bool in C
// at app initialization
// store this variable somewhere you can access it later
bool deviceConfigured = configureCudaDevice();
...
// then later, at run time
if (deviceConfigured)
    runCudaImplementation();
else
    runCpuImplementation(); // run the original code
Now, since you put all your CUDA code in a new .cu file, you only have to compile that file with nvcc. Everything else stays the same, except that you have to link in the object file that nvcc outputs, e.g.
nvcc -c -o myCudaImplementation.o myCudaImplementation.cu <other necessary arguments>
Then add myCudaImplementation.o, along with the CUDA runtime library, to your link line (something like this; adjust the library path to match your CUDA installation):
g++ -o myApp <your other object files> myCudaImplementation.o -L/usr/local/cuda/lib64 -lcudart
Now, if you have a complex app to work with that uses configure and has a complex makefile already, it may be more involved than the above, but this is the general approach. Bottom line is you don't want to compile all of your source files with nvcc, just the .cu ones. Use your host compiler for everything else.
I'm no expert with configure so I can't really help there. You may be able to run configure to generate a Makefile and then edit that Makefile by hand - it won't be a general solution, but it will get you started.
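As a rough illustration, the hand edits might amount to something like this (the variable names are hypothetical; match them to whatever configure actually generated):
# compile the CUDA source with nvcc instead of the host compiler
NVCC = nvcc
myCudaImplementation.o: myCudaImplementation.cu
	$(NVCC) -c -o $@ $<

# add the new object and the CUDA runtime to the existing link rule
myApp: $(OBJS) myCudaImplementation.o
	$(CC) -o $@ $(OBJS) myCudaImplementation.o -L/usr/local/cuda/lib64 -lcudart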
Note that in some cases you may also need to separate compilation of your .cu files from linking them. In this case you need to use NVCC's separate compilation and linking functionality, for which this blog post might be helpful.

Launch interactive OCaml session with library (Yojson) available

I've installed the Yojson library for OCaml via GODI:
http://martin.jambon.free.fr/yojson.html
I want to start an interactive OCaml session (i.e. via the ocaml command) and execute functions from the Yojson library, e.g.
Yojson.Safe.from_string;;
How do I do this? The above command gives "Error: Unbound module Yojson". I've worked out how to compile via ocamlc with Yojson available, but I want to launch an interactive session instead.
I know this seems like a horrible beginner's question, but Yojson comes with no samples and minimal instructions, so I'm really stumped. I've tried various combinations of "#load" and compiler switches and I'm stuck.
The tool you are after is called findlib. It is included in the base GODI installation. The tools that come with findlib allow you to easily compile against most OCaml libraries and use those libraries from a toplevel session (ocaml). The findlib documentation is fairly comprehensive, but here is a quick summary to get started.
To start using findlib from within a toplevel session:
#use "topfind";;
This will display a brief usage message. Then you can type:
#list;;
This will show you a list of all of the available packages. Yojson will likely be among them. Finally:
#require "yojson";;
where yojson is replaced by the appropriate entry shown by #list;;. Yojson's modules should be available for you to use at this point.
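Putting it all together, a session might look like this (the printed type name varies between Yojson versions, so treat the output line as approximate):
$ ocaml
# #use "topfind";;
# #require "yojson";;
# Yojson.Safe.from_string "{\"answer\": 42}";;
- : Yojson.Safe.json = `Assoc [("answer", `Int 42)]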