I'm trying to figure out if it is worthwhile moving to webpack, I am leaning towards saying no - figuring I have more important stuff to do - but I would like to see some practical examples of how to make webpack work.
So if I have the following Gulp.js task, how would I do the same thing as a webpack task?
gulp.task('subpaths', ['clean_subpaths'], function() {
    // Minify and copy all JavaScript (except vendor scripts)
    gulp.src(paths.subpath_scripts)
        .pipe(fileinclude({
            prefix: '##',
            basepath: '#file'
        }))
        .pipe(contextswitch())
        .pipe(uglify())
        .pipe(strip())
        .pipe(rename(function (path) {
            path.basename += timestamp;
        }))
        .pipe(gulp.dest('public/longcache/javascripts/subpath'));
});
So the tasks above do the following:
- include files inside of other files for processing
- run the piped content through my own defined code - I guess in webpack that would be writing my own plugin?
- uglify
- remove console statements
- rename the output file so it has a version
- write out to a specific location
The first item -- include files inside of other files -- is one of webpack's biggest benefits, speaking as someone coming from an all grunt/gulp workflow. Instead of managing dependencies externally (in your build tools), and having to ensure that files are combined correctly based on their runtime dependencies, with webpack your dependencies are part of the codebase, as require() expressions. You write your app as a collection of js modules and each module loads the modules it relies on; webpack understands those dependencies and bundles accordingly. If you're not already writing your js in a modular fashion, it's a big shift, but worth the effort. This is also a reflection of what webpack is meant for -- it's conceptually oriented around building a js application, not bundling some js that your site uses.
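For a concrete sketch of that shift (file and function names here are invented purely for illustration, not part of your project):

// helpers.js -- an illustrative module
module.exports = {
    timestamp: function () {
        return Date.now().toString(36);
    }
};

// main.js -- the webpack entry point; it declares what it depends on
var helpers = require('./helpers');
console.log('build id: ' + helpers.timestamp());

Point webpack at main.js and it walks the require() graph, pulling in helpers.js (and anything helpers.js requires) automatically.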
Your second item would more likely be a custom loader, which is easier to write than a custom plugin. Webpack is very extensible throughout, but writing custom integrations is poorly documented.
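To give a feel for how small a loader can be, here is a bare-bones sketch. It is not your contextswitch logic (only you know that); the replacement token is made up for the example:

// context-switch-loader.js -- a skeletal webpack loader (sketch only)
module.exports = function (source) {
    // `source` is the module's code as a string; return the transformed string
    if (this.cacheable) this.cacheable(); // let webpack cache the result
    return source.replace(/__BUILD_CONTEXT__/g, JSON.stringify('web'));
};

You then wire it up from the config's module.loaders (webpack 1) / module.rules (webpack 2+) array, much like chaining pipes in gulp.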
Webpack's Uglify plugin will also remove console.logs, super easy.
Specifying output details is part of your basic webpack config, just a couple of options to fill in.
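Pulling those last points together, a minimal webpack 1/2-style config might look like this; the entry point and paths are placeholders, not a drop-in replacement for your task:

// webpack.config.js -- minimal sketch; entry and paths are assumed
var path = require('path');
var webpack = require('webpack');

module.exports = {
    entry: './src/subpath/main.js', // assumed entry point
    output: {
        path: path.join(__dirname, 'public/longcache/javascripts/subpath'),
        filename: '[name].[chunkhash].js' // the hash stands in for your timestamp rename
    },
    plugins: [
        new webpack.optimize.UglifyJsPlugin({
            compress: { drop_console: true } // uglify and strip console.* calls
        })
    ]
};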
I have created a basic re-frame app using the template lein new re-frame my-proj. This particular project interfaces with a framework (ecsy) that requires some ES6 modules and ES6 classes, i.e. code that is generated by the user, not simply called from cljs. Since ClojureScript does not currently generate ES6 code, I have created some wrapper ES6 modules in my project from which I plan to call into cljs code.
After much futzing, I have discovered that it's not necessary to turn these js wrapper modules into full-blown npm modules under 'node_modules'; rather, I can simply put them in a sub-directory of my project, e.g. resources/libs, and then add this directory to :js-options in shadow-cljs.edn:
{:lein true
 :nrepl {:port 8777}
 :builds {:app {:target :browser
                :output-dir "resources/public/js/compiled"
                :asset-path "/js/compiled"
                :modules {:app {:init-fn re-pure-ecs-simple.core/init
                                :preloads [devtools.preload]}}
                :devtools {:http-root "resources/public"
                           :http-port 8280}
                ;; add this
                :js-options {:js-package-dirs ["node_modules" "resources/libs"]}}}}
So everything works fine now; the only problem is that if I edit any of the js files in 'resources/public', the lein.bat dev compiler doesn't detect the changes. I can go in and make a mock change to a '.cljs' file, which does cause the compiler to re-compile, but it still doesn't pick up on the changes made to the '.js' file (or '.mjs' file). I have to kill the compiler via Ctrl-C and restart it to get the change propagated. Unfortunately, this takes about 15 seconds since it's a full compile.
I tried adding 'resources/libs' to my 'project.clj':
:source-paths ["src/clj" "src/cljs" "resources/libs"]
to no avail.
I also tried deleting the compiled js files from <my_proj-dir>/resources/public/js/compiled/cljs-runtime:
rm 'module$node_modules$systems.js' 'module$node_modules$systems.js.map'
In this case, the compiler does re-generate the files (upon making a mock .cljs change), but it still uses the prior version, i.e. it must be using a cached version.
Is there a way I can add a watcher to this js directory so I can do incremental builds? There's obviously a watcher on the 'src/cljs' directory already. I have consulted the shadow-cljs User's Guide, but honestly, it's a little overwhelming.
You can follow the directions for requiring local .js in the User's Guide.
Basically you put the .js files into the same folder as your .cljs file and then just require it via (:require ["./foo.js" :as foo]). No additional config required.
I would like to use the ECMAScript 6 module system in a front-end project, so that the interdependencies of the code are clearer than when simply loading "all that might be needed" up front, in the HTML.
However, having the following line in the main JavaScript file does not work:
import fuzLogin from 'fuzLogin'
The error in the browser's console is: can't find variable: require
The compiled code (created by Babel) is:
var _fuzLogin = require("fuzLogin");
var _fuzLogin2 = _interopRequireDefault(_fuzLogin);
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }
Is ECMAScript 6 module system supposed to work, for compiled code, with WebStorm 10?
Should I maybe add some external dependency in my HTML, to provide the missing require?
Are there other ways I could reach a modular front-end orchestration of my JavaScript side?
I think that your Babel configuration is set up to use CommonJS, which transpiles import statements into require() calls (RequireJS)... so, in order to work with that configuration, you need to include RequireJS: http://requirejs.org/
I found two tools that fulfil what I was looking for, in slightly different ways:
jspm
Rollup
JSPM allows on-the-fly loading of ES2015 modules, so that the transpiling happens in the browser. This is pretty awesome, really, and something I wasn't expecting.
In addition, JSPM also provides traditional build tools for doing the bundling for production.
But I actually chose to go with Rollup.
Rollup gathers all kinds of build systems together, and is based on ES2015 packaging, providing what I was after. Most important for me were the brilliant blog posts by Jason Lengstorf (just 1 and 2 weeks old, btw) that walk one through the whole practical setup.
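For a flavour of the setup, a minimal rollup.config.js sketch follows; the file paths are invented, and the entry/dest option names are the ones from the Rollup versions of that era (they were later renamed to input/output):

// rollup.config.js -- minimal sketch; paths are placeholders
export default {
    entry: 'src/main.js',   // ES2015 module entry point
    dest: 'dist/bundle.js', // single bundled output file
    format: 'iife',         // self-executing bundle for a plain <script> tag
    sourceMap: true
};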
References:
jspm-trial (GitHub) - a repo that I made while experimenting with these things
Smaller, More Efficient JavaScript Bundling Using Rollup (blog, Aug 2016)
I'm just wading into setting up an Angular2 app and I'm not sure of the best way to configure index.html. I have two examples to go from: a JavaScript version and a TypeScript version. Since Angular2 uses TypeScript, I thought the TypeScript version makes sense. The JavaScript version comes from the new Angular2 book by Ari Lerner. Here are the two examples:
TypeScript configuration:
System.config({
    transpiler: 'typescript',
    typescriptOptions: {emitDecoratorMetadata: true},
    packages: {app: {defaultExtension: 'ts'}}
});
System.import('app/app');
JavaScript configuration:
System.config({
    packages: {
        app: {
            format: 'register',
            defaultExtension: 'js'
        }
    }
});
System.import('app/app.js')
    .then(null, console.error.bind(console));
My question is which one is the best to use, and why?
An Angular2 app can be written in TypeScript or in ES5/ES6. While learning Angular2, you must have noticed that the Angular2 website provides two kinds of docs (actually three) - check now if you haven't: Angular2 for TypeScript, Angular2 for JavaScript, and Angular2 for Dart.
Now it is up to you which platform you are going to write your Angular2 app for.
1) If you plan to write your Angular2 app in TypeScript, your target web browser obviously won't understand it, so some mechanism has to convert the TypeScript code into JavaScript that the browser can understand. That is what the first snippet of code does: it converts/transpiles the .ts files into .js on the fly so your browser is able to understand them.
2) If you plan to write your Angular2 app in JavaScript/ES5/ES6, then of course use the second configuration from your question.
Suggestion: go with the first, as Angular2 itself has been written in TypeScript from scratch.
First, I think that there is a small mistake in the JavaScript configuration:
System.config({
    packages: {
        app: {
            format: 'register',
            defaultExtension: 'js'
        }
    }
});
System.import('app/app') // <------
    .then(null, console.error.bind(console));
Regarding your question:
I think that the first configuration (transpiling on the fly) is fine for small applications where performance doesn't matter. As a matter of fact, there is additional processing in the browser to transpile the application code into something executable at the point of module loading. Such an approach is fine for applications executed in Plunker.
I would say that the JavaScript configuration, which leverages precompilation of the TypeScript content, is more efficient, since the browser directly executes module code in JavaScript (no transpiling done by the browser itself).
That said, if you want something really efficient (a production-ready application), you need to package your application:
1) precompile your TypeScript code into JavaScript
2) gather the JavaScript code into a minimal set of JS files to minimize the number of files to load
3) minify the JavaScript code to reduce its size
The first approach doesn't address these three points. The second approach allows precompilation and minification, but you can't gather your whole application code into a single file.
To do that, you need to leverage the outFile parameter of the tsc compiler. This way you will have all modules in a single file that you can minify. In this case, you no longer need to configure SystemJS; you only need to import the main module...
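Concretely, if you compile with something like tsc --module system --outFile public/app.js (paths assumed), the page only needs SystemJS itself plus that single bundle, and the bootstrap shrinks to:

// No System.config() packages block is needed once everything is registered
// in the one bundled file; 'app/main' is an assumed main-module name.
System.import('app/main')
    .then(null, console.error.bind(console));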
These questions could give you additional hints:
How to combine (minify) compiled Angular 2 components?
Angular2 TypeScript transpiler with Minification / Uglify
These days, a good approach to getting great performance in an SPA is to prepare a gzipped client-side bundle from a few gulp tasks.
Based on that, an awful thing to have to debug is a full, unminified bundle in the dev environment. The question is whether it's possible to use a gulp browserify task and gulp-inject to unroll the client bundle into separate files, the way they were developed.
I mean, maybe it would be possible to inject either a bundle or a couple of files with some browserify boilerplate that resolves the bunch of require() and module.exports statements.
Thoughts?
The correct answer is to use an option that gulp-browserify provides to serve the complete source tree instead of a bundle: just set the optional flag debug: true, as in the following example:
var gulp = require('gulp');
var browserify = require('gulp-browserify');

gulp.src('./app/js/app.js') // this path is the entry point
    .pipe(browserify({
        insertGlobals: true,
        debug: true
    }));
Is there a simple HTML preprocessor available that works with existing HTML code, so that I won't need to modify my existing HTML to conform to the preprocessor's syntax?
I'm developing a mobile app with a few pages built with HTML, but I've run into the problem of having to update each page whenever I make changes to shared content (header, footer, etc.). I don't want to duplicate content and end up with mismatches; rather, I'd like to use a preprocessor that has an include method (something like how PHP does it), so that I can include this shared content.
I've looked at things like Haml and Jade, but it seems they have a strict syntax you need to follow, with indentation and the like, or editing the HTML to include pipes on each line, else things won't compile.
Like I said, I have existing HTML; I would just like to cut and paste it into different files, include them, and stick with that workflow, as I think it's the simplest.
But if anyone has other ideas for how I can tackle my situation, that is welcome too.
I guess that since your requirement is only to include files, you don't need a full-blown template system. You could take a look at gulp-include, which is a gulp plugin for including files. Using gulp has the advantage that it comes with a watch feature to watch the file system for changes - whenever a change is detected, a task can be triggered.
An example of how your gulpfile.js could look:
var gulp = require('gulp');
var include = require('gulp-include');

gulp.task('template', function() {
    return gulp
        .src('*.html')
        .pipe(include())
        .pipe(gulp.dest('dist'));
});

gulp.task('dev', function() {
    gulp.watch('*.html', ['template']);
});

gulp.task('default', ['template']);
This gulpfile registers a 'template' task that takes all HTML files and processes each file's contents with the gulp-include plugin. The template task is registered as the default gulp task, so if you invoke gulp without any command-line arguments, the template task is run. The 'dev' task lets you run gulp dev from the command line, which watches all HTML files for changes and triggers the template task whenever an HTML file changes.
The gulp-include plugin scans your HTML files for something like
= include relative/path/to/file.html
and includes 'file.html' contents.
The include syntax is quite well documented on the gulp-include web site.