I wonder if there is a way to get a warning when a module has an IO that I haven't connected. I'm generating a Verilog file from my Chisel design, and in some cases I can see that some of my IOs are getting ripped out. I'm guessing the reason is that I have forgotten to connect them.
Check out chiselTests/InvalidateAPISpec.scala. It exercises the new Invalidate API that is just being released. If that's not clear enough, I can put something together tomorrow; we haven't created a wiki page for it yet.
As Chick mentions, we've just published a release candidate supporting the new Invalidate API - chisel3-3.0.0-RC1. Code is in the tip of both master and release branches. We've put together a wiki page describing the new API and how to deal with the errors it may trigger.
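Roughly, what changes in practice is that an output you leave undriven is now an elaboration error instead of being silently ripped out of the generated Verilog, and you mark intentionally unused outputs with DontCare. A minimal sketch (the module and port names are made up):

import chisel3._

class Adder extends Module {
  val io = IO(new Bundle {
    val a     = Input(UInt(8.W))
    val b     = Input(UInt(8.W))
    val sum   = Output(UInt(8.W))
    val debug = Output(UInt(8.W)) // a port we intentionally leave unused
  })

  io.sum := io.a + io.b

  // Without the line below, elaboration now fails with a "not fully initialized"
  // error instead of quietly removing io.debug from the generated Verilog.
  io.debug := DontCare
}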
Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code-sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am looking for a simple, and hopefully official, way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/ios/android) and environment (development/production/staging?).
Doing the first part is obviously trivial; after all, that is the prime purpose of the code-sharing schematics. Different versions of the same file are identified by different extensions. This page explains things pretty simply.
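For reference, the convention from that page boils down to something like this (the config file names are just an example):

// config.ts          -> picked up by the web (ng serve / ng build) build
// config.tns.ts      -> picked up by the NativeScript (tns) build
// config.android.ts  -> optionally, a platform-specific variant (more specific wins)
//
// Consumers always import the extension-less path and webpack resolves the
// right file for the current platform:
import { config } from './config';

export function apiUrl(): string {
  return config.apiUrl;
}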
What I don't see as easily is whether the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or, even better, development/staging/production) versions of a file. Think, for example, of a config.ts file that contains different parameters depending on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters during the call to tns build / tns run and then fetching them via webpack env variables (see here). This may work, but it seems oddly convoluted.
a third option that gets mentioned is to use hooks to customize the build (or use a plugin that does the same).
lastly, for some odd reason, @nativescript/schematics generates a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve); I wasn't able to get the mobile compiler to recognize files that end with debug.ts, prod.ts or release.ts.
While it may be possible that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something, somewhere.
In case this IS somehow supported, I also wonder how it would integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only option available is switching between debug and release mode), but that is probably better left for another question.
Environment files are not yet supported; passing environment variables from the build command could be a viable solution for now.
But of course, you may write your own schematics if you'd like immediate support for environment files.
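To illustrate the first suggestion: extra --env.* flags passed to tns end up in the env argument of the generated webpack.config.js, so you can swap in a per-environment file there. A rough sketch, assuming the default nativescript-dev-webpack setup; the configuration.<env>.ts naming is my own, not a template convention:

// tns run android --bundle --env.staging
//
// webpack.config.js (only the relevant excerpt of the generated file)
const { resolve } = require("path");

module.exports = env => {
    const environment =
        env && env.production ? "production" :
        env && env.staging    ? "staging"    :
                                "development";

    const config = { /* ...the rest of the generated configuration... */ };

    // Point the extension-less import at the environment-specific file.
    config.resolve = config.resolve || {};
    config.resolve.alias = Object.assign({}, config.resolve.alias, {
        "~/shared/configuration": resolve(
            __dirname, `src/app/shared/configuration.${environment}.ts`),
    });

    return config;
};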
I did not look into sharing environment files between web and mobile yet - I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick. The latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between it and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!
We have a group of developers moving from C++ to C# and WinRT. We used Doxygen as part of our C++ developer builds, and I'd like to continue to have documentation generation as part of the developer build in C#/WinRT.
It's easy to turn on XML Doc generation, and I believe that will provide warnings for malformed tags, but without actual HTML output, I think our developers will be missing valuable feedback.
Looks like NDoc is now defunct, and I took a quick look at Sandcastle, but found it rather complex. Ideally, I'm looking for something that doesn't unduly burden developers, or require them to remember extra steps as they edit, build, test, and commit. In other words, the best solution would be something that "just happens", like a post-build step, and doesn't add significantly to each developer's build time.
If anyone has had some experience doing this in C#/WinRT, I'd sure like some advice.
Thanks in advance!
Get Sandcastle Help File Builder.
Create a help project for your library in the Visual Studio solution.
Remove the Build check mark from the Debug solution configuration so the documentation project is built only in Release configurations, since Debug is most often used during development. For release-build testing or performance testing you can either create another solution configuration or simply switch the option back and forth.
Build the documentation once.
Include the documentation file in the solution so it shows up in the Pending Changes window when the file changes.
Kindly ask your developers to build with the release configuration (which updates the documentation) before check-in, or use any other policy to require updating the documentation.
I don't think it makes sense to build the documentation all the time, but it helps to make it easy to do so that when you actually need an updated version - you can build it really quickly.
You can also make sure to use FxCop or StyleCop (I forget which) and configure it to treat missing-XML-documentation warnings as errors - at least in release builds. Doing it for debug configurations might slow down development and make changes difficult, since developers often want to try things out before committing to a final implementation worth documenting.
EDIT: Sandcastle provides various output formats, selectable in the help project's properties.
I would like to mention ForgeDoc (of which I'm the developer), it could be what you are looking for. It is designed to be fast and simple, and it generates proper MSDN-like HTML output. It also has a command-line interface so you can just call it from a post-build event command in Visual Studio.
I think you should give it a try, as I would really like to hear about your thoughts.
I saw this tutorial for writing a JSON-RPC server for SWI-Prolog. Unfortunately, all it does is add two numbers. I'm wondering whether there exists an RPC server for SWI-Prolog that can define new rules and answer general Prolog queries, returning JSON lists and so on?
If you take a tour of the SWI-Prolog website, proudly self-powered, you can see some of the features offered by the http package at work.
It's a fairly large range of tools, and the easiest way to grasp the basics of the system is to follow the specific HowTo section step by step. There is a small bug you should be aware of in the LOD Crawler: add an option on line 42 of lod.pl:
...
; rdf_load(URI2, [format(xml)]),
....
or you will probably get
Internal server error
Domain error: `content_type' expected, found `text/xml;charset=UTF-8'
when running the sample.
An important feature of the IDE is the ability to debug HTTP requests.
When done with the HowTo, you can take a look at Cliopatria, dedicated to interfacing RDF with HTML. It comes with a pirates demo; I must say I find it a bit too 'crude' for my taste, and I don't know about YUI, which is used in the award-winning MultimediaN project. I've used Bootstrap instead to give the front end a modern look, with appreciable results (I'm sorry I can't publish it yet; I need more time to engineer the system).
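Coming back to the original question, the same http machinery makes a small JSON query service short to write. A minimal sketch (the route, the example facts and the predicate names are all made up):

:- use_module(library(http/thread_httpd)).
:- use_module(library(http/http_dispatch)).
:- use_module(library(http/http_json)).

:- http_handler(root(ancestors), handle_ancestors, []).

parent(tom, bob).
parent(bob, ann).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

%% POST {"person": "tom"} to /ancestors and get back a JSON list of solutions.
handle_ancestors(Request) :-
    http_read_json_dict(Request, In),
    atom_string(Person, In.person),
    findall(A, ancestor(Person, A), As),
    reply_json_dict(_{person: In.person, ancestors: As}).

server(Port) :- http_server(http_dispatch, [port(Port)]).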
HTH
At the company where I work, I was asked to write an auto-update function à la Chrome, i.e. it should check periodically whether a new version is available, download the new version, and apply it silently the next time the application starts.
I already have something up and running, but it is more like a dirty hack than something I feel happy about. So I would like to know how to design and implement such a solution. My horrible hack works like this:
Have a mechanism to check whether a new version exists (a database query or a web service)
Download a full zip with the whole new version.
Check the file signature. If everything went all right, set a registry value, "must update", to true.
When the application restarts, if the "must update" value is true, launch an update program and exit.
The update program deletes the contents of the application folder, unzips the update to replace the old contents, launches the application and exits.
Now I would like to change it so it works more cleanly. I am planning to send the update as a bsdiff file. It gets downloaded. But the question is, what happens next?
When do I apply the update?
Who is in charge of applying the patch? Is it the program itself, or is it a third program, as in my current hack, that applies the patch and relaunches the application?
If you're going down the C++ route you can go to Chromium, download the Chrome source code and dig around to see how the update is done; this might give you a better idea of how to approach it. Here's an article that might help.
If you're familiar with .NET, the recently released NuGet also has an auto-update feature that might be useful to look at; you can get the source code from here. David Ebbo has a blog post about how it's done here.
I'm not up to date on Delphi but you might be able to use either of the above options.
The workflow you proposed is more or less how it should work, but there's no need to re-invent the wheel - there are plenty of libraries out there that will do this for you. Using a 3rd-party library has the benefit of keeping your code cleaner while making sure the dirty process of auto-update is contained and working flawlessly.
Trust me, I know. I'm the author of NAppUpdate, an app update framework for .NET (which you might want to try out or learn from).
So, after giving it a lot of thought, this is what I came up with (the active directory is the directory where the main program lies, the active program is the main program, and the update program is the one that replaces the active program and its resource files):
The active program checks at regular intervals whether there is a new version. If so, it downloads it.
Prepare the new version in a separate folder (this can be done by copying the contents of the program's directory to a subdirectory and applying a binary patch, or simply by unzipping the new version).
Set a flag that indicates that a new version is ready.
When the program is exiting (and one has to account for different interrupts here):
The active program checks the new-version-ready flag; if it is set, it launches the update program and exits.
The update program checks whether it can write in the active directory. If so, it replaces the contents with the prepared version.
The update program has to recheck links and update them accordingly.
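As a rough sketch of the update program's side of that hand-off (the staging folder, the executable name and the file layout below are all illustrative, not tied to any particular framework):

# updater.py - illustrative sketch of the "update program" in the workflow above.
# Assumes the active program only launches this after it has prepared the new
# version in a staging subfolder and set the new-version-ready flag.
import os
import shutil
import subprocess
import sys
import time

ACTIVE_DIR = os.path.abspath(os.path.dirname(sys.argv[0]))  # where the app lives
STAGING_DIR = os.path.join(ACTIVE_DIR, "update.staged")     # prepared new version
APP_EXE = os.path.join(ACTIVE_DIR, "myapp.exe")             # hypothetical name

def can_write(path):
    """Crude check that the active directory is writable (old exe released)."""
    probe = os.path.join(path, ".write_probe")
    try:
        with open(probe, "w"):
            pass
        os.remove(probe)
        return True
    except OSError:
        return False

def main():
    # Wait briefly for the active program to finish exiting and release its files.
    for _ in range(30):
        if can_write(ACTIVE_DIR):
            break
        time.sleep(1)
    else:
        sys.exit("active directory still locked, giving up")

    # Replace the old contents with the prepared version, entry by entry,
    # so the updater itself (running from the same folder) is not deleted.
    for name in os.listdir(STAGING_DIR):
        src = os.path.join(STAGING_DIR, name)
        dst = os.path.join(ACTIVE_DIR, name)
        if os.path.isdir(src):
            shutil.rmtree(dst, ignore_errors=True)
            shutil.copytree(src, dst)
        else:
            shutil.copy2(src, dst)

    shutil.rmtree(STAGING_DIR, ignore_errors=True)

    # Relaunch the freshly updated application and exit.
    subprocess.Popen([APP_EXE], cwd=ACTIVE_DIR)

if __name__ == "__main__":
    main()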
So guys, if you have a better workflow, please tell me.
You could literally use the Google Chrome update workflow by using the Google Chrome updater:
http://code.google.com/p/omaha/
They open-sourced it in February 2009.
Everyone maintaining open-source software runs into the problem that, over time, releasing a new version becomes more and more work. You have to tag the release in your version control, create the distributions (which should be easy with automated builds), and upload them to your website and/or open-source hoster. You have to announce the new release, with nearly the same message, on chosen web forums, the news system on SourceForge, mailing lists, and your blog or website. And you have to update the entry for your software on Freshmeat. Possibly more tasks have to be done for the release.
Have you developed techniques to automate some of these tasks? Does software exist that supports you with this?
Pragmatic Project Automation shows how to do all of that. They use Ant for practically everything in the book, so if you know Ant you can make different targets to do any step in the build-release cycle.
For my Perl stuff, I wrote Module::Release. In the top-level directory I type a single command:
% release
It checks several things and dies if anything is wrong. If everything checks out, it uploads the distribution.
It automates my entire process:
Test against multiple versions of Perl
Test distribution files
Check the status of source control
Check for code and distribution quality metrics
Update changes file
Determine new version number
Release code to multiple places
Tag source control with new version number
Everyone seems to write their own release automator though. Most people like their process how they like their process, so general solutions don't work out that well socially.
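A home-grown equivalent usually boils down to a short script that dies at the first failing step; a rough sketch with placeholder commands (swap in whatever your project actually uses to test, build and upload):

#!/usr/bin/env python3
# release.py - illustrative home-grown release script; every command here is a
# placeholder, and pytest/build/twine are assumptions, not requirements.
import glob
import subprocess
import sys

if len(sys.argv) != 2:
    sys.exit("usage: release.py <version>")
version = sys.argv[1]

def run(*cmd):
    """Run a command, echo it, and die if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Refuse to release from a dirty working copy.
status = subprocess.run(["git", "status", "--porcelain"],
                        capture_output=True, text=True, check=True)
if status.stdout.strip():
    sys.exit("working copy is not clean, commit or stash first")

# 2. Run the test suite (placeholder).
run("python", "-m", "pytest")

# 3. Build the distribution (placeholder).
run("python", "-m", "build")

# 4. Tag source control with the new version number.
run("git", "tag", "-a", "v" + version, "-m", "release " + version)
run("git", "push", "--tags")

# 5. Upload the artifacts to your hoster of choice (placeholder).
run("python", "-m", "twine", "upload", *glob.glob("dist/*"))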
Brad Fitzpatrick has ShipIt which is a Perl program to automate releases. There's slightly more info in his original announcement.