I am attempting to concatenate a number of JS files within a nested directory structure into a single file in a different location. They have to be concatenated in a specific order, and I cannot find a way of changing the default order in which gulp's glob search retrieves nested files. I have tried various glob patterns to no avail.
My directory structure is as follows:
components
- componentA
  - controllers
    - controllerA1.js
    - controllerA2.js
  - services
    - serviceA1.js
  - configA.js
  - moduleA.js
- componentB
  - controllers
    - controllerB1.js
    - controllerB2.js
  - services
    - serviceB1.js
  - configB.js
  - moduleB.js
I want the files to concatenate to a single file in the following order:
configA.js
moduleA.js
controllerA1.js
controllerA2.js
serviceA1.js
configB.js
moduleB.js
controllerB1.js
controllerB2.js
serviceB1.js
So that gulp descends into each component and iterates down through it as far as it can go before moving on to the next component and doing the same.
Instead it concatenates in the following order:
configA.js
moduleA.js
configB.js
moduleB.js
controllerA1.js
controllerA2.js
serviceA1.js
controllerB1.js
controllerB2.js
serviceB1.js
In other words, it goes into a top-level directory, iterates through each of the top-level files in that directory, then jumps to the next top-level directory and does the same, before returning to the first top-level directory and iterating through the next level down, and so on.
I've tried a couple of different methods which have each presented problems.
I have tried using the gulp-recursive-folder plugin to customise the iteration order as follows:
gulp.task('generateTree', recursivefolder({
    base: './components',
    exclude: [ // exclude the debug modules from this build
        //'debug-modules'
    ]
}, function(folderFound){
    // This will loop over all folders inside the main pathToFolder and recursively over the secondary (child) folders
    // folderFound.name gets the folder name
    // folderFound.path gets the full folder path found
    // folderFound.pathTarget gets the relative path beginning from options.pathFolder
    return gulp.src(folderFound.path + "/**/*.js")
        .pipe($.concat("app.js"))
        .pipe(gulp.dest('./build/assets/js/'));
}));
This iterates in the order I want, but I believe it writes the first top-level dir as one stream and then writes the second dir as another stream, so that the second stream overwrites the first. I am left with only the following files being concatenated:
configB.js
moduleB.js
controllerB1.js
controllerB2.js
serviceB1.js
So I've also tried using the add-stream plugin to recursively add to the same stream before writing to file. I won't bore anyone with the details, but basically I can't get this to work as desired either. Can anyone recommend a post/tutorial/plugin? Thanks.
gulp.src() respects the ordering of globs that are passed to it and emits files in the same order. That means if you explicitly pass a glob for each component to gulp.src() it will first emit the files for the first component, then for the second component and so on:
gulp.task('default', function() {
    return gulp.src([
            'components/componentA/**/*.js',
            'components/componentB/**/*.js'
        ])
        .pipe($.concat('app.js'))
        .pipe(gulp.dest('./build/assets/js/'));
});
Obviously you don't want to maintain that array manually. What you want to do is generate the array based on the components that are available in your project. You can use the glob module for that:
var glob = require('glob');

gulp.task('default', function() {
    return gulp.src(glob.sync('components/*').map(c => c + '/**/*.js'))
        .pipe($.concat('app.js'))
        .pipe(gulp.dest('./build/assets/js/'));
});
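glob.sync() sorts its matches alphabetically by default, so componentA will be expanded before componentB. If you also need a fixed order within each component (config first, then module, then everything else, as in your desired output), you could expand each component into several ordered globs. This is an untested sketch: it assumes gulp.src() emits each file only once, in the position of the first glob that matches it, and the config*/module* patterns simply reflect your naming scheme:

var glob = require('glob');

gulp.task('default', function() {
    // For each component, list the config and module files before the
    // catch-all glob; later duplicate matches are assumed to be dropped.
    var orderedGlobs = glob.sync('components/*').reduce(function(globs, component) {
        return globs.concat([
            component + '/config*.js',
            component + '/module*.js',
            component + '/**/*.js'
        ]);
    }, []);

    return gulp.src(orderedGlobs)
        .pipe($.concat('app.js'))
        .pipe(gulp.dest('./build/assets/js/'));
});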
I'm very new to Gulp and am trying to figure out how to exclude more than one file from a glob. For instance, this works for a single file:
return gulp.src(['assets/js/src/!(app)*.js'])
But is it possible / what would the syntax be for excluding multiple files? E.g. return gulp.src(['assets/js/src/!(app,test)*.js']) but obviously this doesn't work...
EDIT - I am later adding the files app.js & test.js back in, so they are appended at the end of the minified file.
e.g. (this works for a single file only): return gulp.src(['assets/js/src/!(app)*.js', 'assets/js/src/app.js'])
I am looking for a solution to add multiple files back in at the end so I can control the order of the minified file. Also, it's important that I can use a glob to catch all files that will be added in future so I don't want to list them explicitly.
In glob the OR pattern is defined like {A,B}. So, this should work:
gulp.src(['assets/js/src/*.js', '!assets/js/src/*{app,test}*.js'])
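As an aside, minimatch (the matcher used by gulp.src) also supports extglob negation, where the separator is | rather than ,. So a single pattern may work as well, with the excluded files added back at the end of the array; this is untested:

gulp.src(['assets/js/src/!(app|test)*.js', 'assets/js/src/app.js', 'assets/js/src/test.js'])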
You added some crucial information that completely changes the answer! You should edit your original post to include the new condition. So, with thanks to @dloeda for the glob negation, you might now try gulp-filter:
const gulp = require('gulp');
// const uglify = require('gulp-uglify');
const filter = require('gulp-filter');

gulp.task('default', () => {
    // Create filter instance inside task function
    const f = filter(['assets/js/src/*.js', '!assets/js/src/*{app,test}*.js'], {restore: true});

    return gulp.src('src/**/*.js')
        // Filter a subset of the files
        .pipe(f)
        // Run them through a plugin
        // .pipe(uglify())
        // Bring back the previously filtered out files (optional)
        .pipe(f.restore)
        .pipe(gulp.dest('dist'));
});
However, it is not clear to me from the gulp-filter documentation whether f.restore adds the filtered files at the end of the stream or back in their original location in the src stream. If you find that it doesn't put them at the end, let me know and it could be modified to do so by another method.
Also see Contra's answer about adding to src: if you are using gulp 4.0, it is very easy to add to the src.
Alternatively, gulp-add-src looks very promising and I just discovered it, so you could try this alternative code:
var addsrc = require('gulp-add-src');
var uglify = require('gulp-uglify'); // needed for the uglify() step below

gulp.task('build', function () {
    // start with excluding app.js and test.js
    return gulp.src(['assets/js/src/*.js', '!assets/js/src/*{app,test}*.js'])
        // .pipe(whatever)
        .pipe(addsrc.append('assets/js/src/app.js')) // append app.js to the end of the stream
        .pipe(uglify()) // we minify everything
        .pipe(gulp.dest('dist')); // and write to dist
});
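Since you also want test.js back at the end, and gulp-add-src appears to accept the same glob arguments as gulp.src, you could presumably append both files in one call (untested):

.pipe(addsrc.append(['assets/js/src/app.js', 'assets/js/src/test.js']))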
In Laravel 5 I am trying to create two different CSS files, one for my frontend site and one for my backend site (CMS). The source files are in two different directories.
The default value of elixir.config.assetsDir is 'resources/assets/', so I override it for each build. First the backend:
elixir.config.assetsDir = 'resources/backend/';
elixir(function (mix) {
    mix.less('backend.less');
});
Second the frontend:
elixir.config.assetsDir = 'resources/frontend/';
elixir(function (mix) {
    mix.less('frontend.less');
});
Both are in the same gulpfile.js.
These are the directories (Laravel 5):
resources
- backend
  - less
    - backend.less
- frontend
  - less
    - frontend.less
Only the frontend file is compiled to public/css/frontend.css.
I also tried
mix.less('frontend.less', null, 'resources/frontend/');
Though this works for mixing script files, it does not work for mixing Less files.
**Update 28-3-2015**
There seems to be no solution for my problem. When I do:
elixir.config.assetsDir = 'resources/frontend/';
mix.less('frontend.less');
elixir.config.assetsDir = 'resources/backend/';
mix.less('backend.less');
Only the last one (backend) is executed. When I comment out the last two lines, the first one (frontend) is executed. It's OK for now, because the backend styles should not change very often, but it would be very nice to mix multiple Less files from multiple resource folders to multiple destination folders.
Try:
elixir(function(mix) {
    mix.less([
        'frontend/frontend.less',
        'backend/backend.less'
    ], null, './resources');
});
Instead of your variant:
elixir(function(mix) {
    elixir.config.assetsDir = 'resources/frontend/';
    mix.less('frontend.less');
    elixir.config.assetsDir = 'resources/backend/';
    mix.less('backend.less');
});
Try this code:
elixir.config.assetsDir = 'resources/frontend/';
elixir(function(mix) {
    mix.less('frontend.less');
});

elixir.config.assetsDir = 'resources/backend/';
elixir(function(mix) {
    mix.less('backend.less');
});
I have been playing around with this for a couple of days, and the best option I've found so far is as follows.
First, leave your resource files in the default location, so for Less files that is resources/assets/less. Then, to separate the files into your front-end and back-end resources, add subfolders in the specific resource folder, like so:
resources/assets/less/frontend/frontend.less
resources/assets/less/backend/backend.less
Now call each one like so:
mix.less('frontend/frontend.less', 'public/css/frontend/frontend.css');
mix.less('backend/backend.less', 'public/css/backend/backend.css');
The second parameter provided to each mix.less can point to wherever you want it to.
You can't split at the highest level directly in the resource root, but it still allows some separation, and everything is compiled in one gulp run.
I have found the following to work:
elixir(function (mix) {
    mix
        .less(['app.less'], 'public/css/app.css')
        .less(['bootstrap.less'], 'public/css/bootstrap.css');
});
The key things to notice:
- provide the file name in the destination, i.e. writing public/css/app.css instead of public/css/
- chain the .less calls instead of making two separate mix.less() calls
Works for me with laravel-elixir version 3.4.2
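Combining this chaining with the base-directory argument from the earlier answers, a sketch for the original frontend/backend layout might look like the following. This is untested and assumes the third argument sets the source base directory, as in the asker's own mix.less call:

elixir(function (mix) {
    mix
        .less(['frontend/frontend.less'], 'public/css/frontend.css', './resources')
        .less(['backend/backend.less'], 'public/css/backend.css', './resources');
});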
Given a com.box.androidlib.Utils.BoxUtils.BoxFolder object, I would like to recurse up through the object's parent folders to retrieve the path from the root.
I would hope to do this with something like the code below, where currentBoxFolder is retrieved using Box.getAccountTree(…), as done in the Browse class of the included sample code. However, getParentFolder() returns null (for non-root folders, for which I expect it to be non-null).
I figure that it might be possible to populate the parent variable by modifying the source to fetch additional attributes, but I was unable to. Any suggestions?
List<BoxFolder> parentDirs = new ArrayList<BoxFolder>();
parentDirs.add(new BoxFolderEntry(currentBoxFolder));

BoxFolder parent = currentBoxFolder.getParentFolder();
while (parent != null) {
    parentDirs.add(0, parent);
    parent = parent.getParentFolder();
}
If the end goal is for you to know the path from the root to a folder, there are a couple ways to solve this:
OPTION 1:
Maintain your own map of folder_ids and folder_names as your application fetches them. Presumably, in order to get the id of currentBoxFolder, you would have had to do getAccountTree() calls on all its parents beforehand. So if that's the case, you could maintain 2 maps:
Folder ID => Parent Folder ID
Folder ID => Folder Name
From those two maps, you should always be able to get the path from the root.
OPTION 2:
There are 2 params that can be added to the Box.getAccountTree() method that will allow you to know the path:
"show_path_ids"
"show_path_names"
These params haven't been documented yet (we'll do that), but they will cause BoxFolder.getFolderPathIds() and BoxFolder.getFolderPath() to return values such as:
"/5435/4363"
"/blue folder/green folder"
I'm trying to create an SSIS package to process files from a directory that contains many years worth of files. The files are all named numerically, so to save processing everything, I want to pass SSIS a minimum number, and only enumerate files whose name (converted to a number) is higher than my minimum.
I've tried letting the ForEach File loop enumerate everything and then exclude files in a Script Task, but when dealing with hundreds of thousands of files, this is way too slow to be suitable.
The FileSpec property lets you specify a file mask to dictate which files you want in the collection, but I can't quite see how to specify an expression to make that work, as it's essentially a string match.
If there's an expression within the component somewhere which basically says Should I Enumerate? - Yes / No, that would be perfect. I've been experimenting with the below expression, but can't find a property to which to apply it.
(DT_I4)REPLACE( SUBSTRING(#[User::ActiveFilePath],FINDSTRING( #[User::ActiveFilePath], "\", 7 ) + 1 ,100),".txt","") > #[User::MinIndexId] ? "True" : "False"
Here is one way you can achieve this: use an Expression Task combined with a Foreach Loop Container to match the numerical values of the file names. The example below illustrates how to do this; it uses SSIS 2012.
This may not be very efficient, but it is one way of doing it.
Let's assume there is a folder with a bunch of files named in the format YYYYMMDD. The folder contains files for the first day of every month since 1921, like 19210101, 19210201, 19210301, and so on, all the way up to the current month, 20121101. That adds up to 1,103 files.
Let's say the requirement is only to loop through the files that were created since June 1948. That would mean the SSIS package has to loop through only the files greater than 19480601.
On the SSIS package, create the following three parameters. It is better to configure these as parameters because the values are configurable across environments.
ExtensionToMatch - This parameter of String data type will contain the extension that the package has to loop through. It supplies the value to the FileSpec variable that will be used on the Foreach Loop container.
FolderToEnumerate - This parameter of String data type will store the folder path that contains the files to loop through.
MinIndexId - this parameter of Int32 data type will contain the minimum numerical value above which the files should match the pattern.
Create the following four variables that will help us loop through the files.
ActiveFilePath - This variable of String data type will hold the file name as the Foreach Loop container loops through each file in the folder. This variable is used in the expression of another variable. To avoid error, set it to a non-empty value, say 1.
FileCount - This is a dummy variable of Int32 data type will be used for this sample to illustrate the number of files that the Foreach Loop container will loop through.
FileSpec - This variable of String data type will hold the file pattern to loop through. Set the expression of this variable to the value mentioned below. The expression uses the extension specified in the parameters; if no extension is given, it falls back to *.* to loop through all files.
"*" + (#[$Package::ExtensionToMatch] == "" ? ".*" : #[$Package::ExtensionToMatch])
ProcessThisFile - This variable of Boolean data type will evaluate whether a particular file matches the criteria or not.
Configure the package as shown below. Foreach loop container will loop through all the files matching the pattern specified on the FileSpec variable. An expression specified on the Expression Task will evaluate during runtime and will populate the variable ProcessThisFile. The variable will then be used on the Precedence constraint to determine whether to process the file or not.
The script task within the Foreach loop container will increment the counter of variable FileCount by 1 for each file that successfully matches the expression.
The script task outside the Foreach loop will simply display how many files were looped through by the Foreach loop container.
Configure the Foreach loop container to loop through the folder using the parameter and the files using the variable.
Store the file name in variable ActiveFilePath as the loop passes through each file.
On the Expression task, set the expression to the following value. The expression will convert the file name without the extension to a number and then will check if it evaluates to greater than the given number in the parameter MinIndexId
#[User::ProcessThisFile] = (DT_BOOL)((DT_I4)(REPLACE(#[User::ActiveFilePath], #[User::FileSpec] ,"")) > #[$Package::MinIndexId] ? 1: 0)
Right-click on the Precedence constraint and configure it to use the variable ProcessThisFile on the expression. This tells the package to process the file only if it matches the condition set on the expression task.
#[User::ProcessThisFile]
On the first script task, I have the variable User::FileCount listed in the ReadWriteVariables and the following C# code within the script task. This increments the counter for each file that successfully matches the condition.
public void Main()
{
    Dts.Variables["User::FileCount"].Value = Convert.ToInt32(Dts.Variables["User::FileCount"].Value) + 1;
    Dts.TaskResult = (int)ScriptResults.Success;
}
On the second script task, I have the variable User::FileCount listed in the ReadOnlyVariables and the following C# code within the script task. This simply outputs the total number of files that were processed.
public void Main()
{
    MessageBox.Show(String.Format("Total files looped through: {0}", Dts.Variables["User::FileCount"].Value));
    Dts.TaskResult = (int)ScriptResults.Success;
}
When the package is executed with MinIndexId set to 19480601 (excluding this), it outputs the value 773.
When the package is executed with MinIndexId set to 20111201 (excluding this), it outputs the value 11.
Hope that helps.
From investigating how the ForEach loop works in SSIS (with a view to creating my own to solve the issue), it seems that it enumerates the entire file collection first, before any mask is applied. It's hard to tell exactly what's going on without seeing the underlying code for the ForEach loop, but it appears to work this way, resulting in slow performance when dealing with over 100k files.
While #Siva's solution is fantastically detailed and definitely an improvement over my initial approach, it is essentially just the same process, except using an Expression Task to test the filename, rather than a Script Task (this does seem to offer some improvement).
So, I decided to take a totally different approach and rather than use a file-based ForEach loop, enumerate the collection myself in a Script Task, apply my filtering logic, and then iterate over the remaining results. This is what I did:
In my Script Task, I use the lazily-evaluated DirectoryInfo.EnumerateFiles method, which is the recommended approach for large file collections, as it streams results as they are found rather than waiting for the entire collection to be built before applying any logic.
Here's the code:
public void Main()
{
    string sourceDir = Dts.Variables["SourceDirectory"].Value.ToString();
    int minJobId = (int)Dts.Variables["MinIndexId"].Value;

    // Enumerate the file collection (EnumerateFiles lets us start processing immediately)
    List<string> activeFiles = new List<string>();

    System.Threading.Tasks.Task listTask = System.Threading.Tasks.Task.Factory.StartNew(() =>
    {
        DirectoryInfo dir = new DirectoryInfo(sourceDir);

        foreach (FileInfo f in dir.EnumerateFiles("*.txt"))
        {
            FileInfo file = f;
            string filePath = file.FullName;
            string fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1);
            int jobId = Convert.ToInt32(fileName.Substring(0, fileName.IndexOf(".txt")));

            if (jobId > minJobId)
                activeFiles.Add(filePath);
        }
    });

    // Wait here for completion
    System.Threading.Tasks.Task.WaitAll(new System.Threading.Tasks.Task[] { listTask });

    Dts.Variables["ActiveFilenames"].Value = activeFiles;
    Dts.TaskResult = (int)ScriptResults.Success;
}
So, I enumerate the collection, applying my logic as files are discovered and immediately adding the file path to my list for output. Once complete, I then assign this to an SSIS Object variable named ActiveFilenames which I'll use as the collection for my ForEach loop.
I configured the ForEach loop as a ForEach From Variable Enumerator, which now iterates over a much smaller collection (a post-filtered List<string>, compared to what I can only assume was an unfiltered List<FileInfo> or something similar in SSIS's built-in ForEach File Enumerator).
So the tasks inside my loop can be dedicated purely to processing the data, since it has already been filtered before hitting the loop. Although this doesn't seem very different from either my initial package or Siva's example, in production (for this particular case, anyway) filtering the collection and enumerating asynchronously provides a massive boost over using the built-in ForEach File Enumerator.
I'm going to continue investigating the ForEach loop container and see if I can replicate this logic in a custom component. If I get this working I'll post a link in the comments.
The best you can do is use FileSpec to specify a mask, as you said. You could include at least some specs in it, like files starting with "201" for 2010, 2011 and 2012. Then, in some other task, you could filter out those you don't want to process (for instance, 2010).
I have two different versions of linux/unix each running cfengine3. Is it possible to have one promises.cf file I can put on both machines that will copy different files based on what os is on the clients? I have been searching around the internet for a few hours now and have not found anything useful yet.
There are several ways of doing this. At the simplest, you can have different files: promises depending on the operating system, for example:
files:

    ubuntu_10::
        "/etc/hosts"
            copy_from => mycopy("$(repository)/etc.hosts.ubuntu_10");

    suse_9::
        "/etc/hosts"
            copy_from => mycopy("$(repository)/etc.hosts.suse_9");

    redhat_5::
        "/etc/hosts"
            copy_from => mycopy("$(repository)/etc.hosts.redhat_5");

    windows_7::
        "/etc/hosts"
            copy_from => mycopy("$(repository)/etc.hosts.windows_7");
This example can be easily simplified by realizing that the built-in CFEngine variable $(sys.flavor) contains the type and version of the operating system, so we could rewrite this example as follows:
"/etc/hosts"
    copy_from => mycopy("$(repository)/etc.$(sys.flavor)");
A more flexible way to achieve this task is known in CFEngine terminology as "hierarchical copy." In this pattern, you specify an arbitrary list of variables by which you want files to be differentiated, and the order in which they should be considered, from most specific to most general. When the copy promise is executed, the most-specific file found will be copied.
This pattern is very simple to implement:
# Use single copy for all files
body agent control
{
    files_single_copy => { ".*" };
}

bundle agent test
{
    vars:
        "suffixes" slist => { ".$(sys.fqhost)", ".$(sys.uqhost)", ".$(sys.domain)",
                              ".$(sys.flavor)", ".$(sys.ostype)", "" };

    files:
        "/etc/hosts"
            copy_from => local_dcp("$(repository)/etc/hosts$(suffixes)");
}
As you can see, we are defining a list variable called $(suffixes) that contains the criteria by which we want to differentiate the files. All the variables contained in this list are automatically defined by CFEngine, although you could use any arbitrary CFEngine variables. Then we simply include that variable, as a scalar, in our copy_from parameter. Because CFEngine does automatic list expansion, it will try each variable in turn, executing the copy promise multiple times (once for each value in the list) and copying the first file that exists. For example, for a Linux SuSE 11 machine called superman.justiceleague.com, the $(suffixes) variable will contain the following values:
{ ".superman.justiceleague.com", ".superman", ".justiceleague.com", ".suse_11",
".linux", "" }
When the file-copy promise is executed, implicit looping will cause these strings to be appended in sequence to "$(repository)/etc/hosts", so the following filenames will be attempted in sequence: hosts.superman.justiceleague.com, hosts.superman, hosts.justiceleague.com, hosts.suse_11, hosts.linux and hosts. The first one to exist will be copied over /etc/hosts on the client, and the rest will be skipped.
For this technique to work, we have to enable "single copy" on all the files you want to process. This is a configuration parameter that tells CFEngine to copy each file at most once, ignoring successive copy operations for the same destination file. The files_single_copy parameter in the agent control body specifies a list of regular expressions to match filenames to which single-copy should apply. By setting it to ".*" we match all filenames.
For hosts that don't match any of the existing files, the last item on the list (an empty string) will cause the generic hosts file to be copied. Note that the dot for each of the filenames is included in $(suffixes), except for the last element.
I hope this helps.
(p.s. and shameless plug: this is taken from my upcoming book, "Learning CFEngine 3", published by O'Reilly)