I am collecting coverage using jscoverage. The problem is that after storing the report about 15 times it stops working, so I end up with a report that only has some lines covered. If I then start coverage again from scratch and try to merge the new jscoverage.json with the old one, the result gets corrupted. Can someone suggest how to merge two jscoverage.json files?
Note: the coverage is for the same JS file, so the directory and everything else remains the same.
You can merge reports using JSCover (the successor of jscoverage). Use the following command:
java -cp JSCover-all.jar jscover.report.Main --merge REPORT-DIR1 REPORT-DIR2 REPORT-DIR3...DEST-DIR
See http://tntim96.github.io/JSCover/manual/manual.xml#reportMerging
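For example, merging the report directories from two separate runs into a new combined directory could look like this (the directory names below are just placeholders):
java -cp JSCover-all.jar jscover.report.Main --merge reports/run1 reports/run2 reports/merged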
I have two CSV files that I need to compare, but I am not sure whether the Compare Active File command in VS Code will help me.
File 1.csv --> the starting point
id,name
12a,mark
134,jon
151,pete
z18,sab
329,lin
m32,sam
kla,kop
l5h,ming
File 2.csv --> the modified one, with some changes made (two ids deleted)
id,name
12a,mark
134,jon
151,pete
l5h,ming
kla,kop
329,lin
I want to use VS Code to compare these two files and find out which lines from 1.csv have been removed. If I use Compare Active File in VS Code, it only shows me which lines differ from the original one; it does not tell me which data/ids have been removed between the original file (1.csv) and the modified one (2.csv).
I am not sure whether VS Code can do this, or what keywords I should use on Google to find a solution, so I am wondering if anyone could help me with this.
PS: the real files that I need to deal with in the same situation have more than a thousand ids.
Sorry if this has already been asked or resolved somewhere on Stack Overflow; English is not my native language and I don't know what keywords I should use on Stack Overflow for this.
Thanks!
If you multi-select (Ctrl+Left-Click on Windows) both files (1.csv, 2.csv) in the VS Code (File) Explorer window, then select Compare Selected from the right-click menu, a diff window will open with the comparison you desire.
Note the first-selected file will appear on the left pane of the diff window.
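If the diff view is hard to read because the remaining rows are in a different order, one alternative outside VS Code is to compare the sorted files from a shell and keep only the lines that exist in 1.csv but not in 2.csv (file names as in the example above):
comm -23 <(sort 1.csv) <(sort 2.csv)    # prints the rows that were removed in 2.csv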
I think what I am looking to do is fairly simple - I just can't wrap my head around it.
I've got a repo in AzDo. This repo contains configuration files for firewalls. This is how we manage changes in these configurations.
I've got a simple build pipeline that copies the relevant files and creates an artifact.
I have a release pipeline that gets the files onto the on-prem machine in my Deployment Group. The files show up in c:\azagent\r1\_work\<artifact folder>.
As part of this pipeline I am looking to copy the files from c:\azagent\r1\_work\<artifact folder> to e:\shares\<artifact name>. This is the part that I cannot figure out how to make work.
What strategy could I use to put this together? I've looked into the documentation but it seems like this is somewhat of an edge case (not deploying an app or web site, etc). Ideally, I'd love to do this in a multi-stage YAML pipeline - but from what I've read, it appears as if these do not yet support Deployment Groups. So a classic pipeline is fine for now.
You can add a Copy Files task (click the plus sign (+) on the agent job and search for "Copy Files") to your release pipeline to copy the files to a different place on the local machine.
Then specify the source folder (e.g. $(System.DefaultWorkingDirectory)), the contents to copy, and the target folder (e.g. e:\shares\). In the example below, all contents of $(System.DefaultWorkingDirectory) (i.e. C:\agent\_work\r1\a) are copied to the folder D:\Test\New folder.
Please check the predefined variables documentation for more information about how they map to local folders.
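As a rough sketch, the same step expressed as a YAML Copy Files task (if you do later move to a YAML pipeline) could look like the following; the source and target paths simply mirror the example above and would need to be adjusted to your artifact layout:
- task: CopyFiles@2
  displayName: Copy artifact contents to a local folder
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)'
    Contents: '**'
    TargetFolder: 'D:\Test\New folder'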
The problem
In PhpStorm I have a style.css and an app.js file that I have to upload to a server over and over again. I'm trying to automate it.
They're generated/compiled by Webpack, which means that I can't simply use 'Tools' >> 'Deployment' >> 'Upload to...' (since those files aren't and won't ever be open).
What I currently do
At the moment, every time I want to see the changes I've made, I do this (for each file):
Navigate to the files in the file-tree (using the mouse)
Select it
Then I use a shortcut I've set up for Main menu >> Tools >> Deployment >> Upload to..., after which I select the server I want to upload to.
I do this approximately 100+ times per day.
The ideal solution
The ideal solution would be that pressing a shortcut like CMD + Option + Shift + G would upload a selection of files (a scope?) to a predefined remote server.
Solution attempts
Open and upload.
Changing to those files (using CMD + p) and then uploading them (once they're open). But the files are generated, which means that it takes PhpStorm a couple of seconds to render the content (which is necessary before I can do anything with the file) - so that's not faster.
Macro.
Recording a macro, uploading the two files, looking like this:
If I go to the menu and trigger the Macro, then it works. So far so good.
But if I assign a shortcut key and trigger that shortcut while in a file, then it shows me this:
And if I press '1' (for it to upload to number 1 on the list), then it uploads the file that I'm currently in(!?), and not the two files from my macro.
I've tried several different shortcuts (to rule out some kind of keyboard-shortcut-clash):
CMD + Option + CTRL + 0
CMD + Shift + 0
CMD + ;
... Same result.
And PhpStorm's macros don't seem to give me that many options anyway.
Keyboard Maestro.
I've tried doing it using Keyboard Maestro.
But I can't get it set up right, because if it can't find the folders (if they're off-screen, or if I'm in a different project and forgot to adjust the shortcuts), then it blasts through the rest of the recorded actions, resulting in chaos. Ideally it should stop if it can't find the file on the screen.
Update1 - External program
Even if it's not possible to do in PhpStorm, - are there then another program that I could achieve this with?
Update2 - Automatic Deployment in PhpStorm
I've previously used this, but it has happened a few times that I started syncing way too many files, overwriting critical core files. It seems smart, but it can tear down walls if I've forgotten to define an ignore rule properly.
I wish there was an 'Automatic Deployment for these files' function.
Update3 - File Watchers
I looked into file watchers (a recommendation from @LazyOne). Based on this forum thread, file watchers cannot be used to upload files.
It is possible to accomplish this using the external program scp (secure copy):
Steps:
1. Create a Scope (for compiled files app.js and style.css)
2. Create a Custom File Watcher with scp over that Scope
Start with Scope:
Create a Local Scope named scp files for your compiled-files directory (I will assume that Webpack compiles into a dist directory):
Then, to add the dist directory to the Scope, select that folder and click Include Recursively. Apply and move on to File Watchers.
Create a custom template for File Watcher:
Choose a Name
Choose File type as Any
Choose Scope as scp files(created earlier)
Choose Program as scp
Choose Arguments as $FileName$ REMOTE_USER@REMOTE_HOST:/REMOTE_DIR_PATH/$FileName$
Choose Working directory as $FileDir$
That's it. Basically, every time a file in that scope changes, it is copied with scp to the corresponding path on the remote server.
Voila. Apply everything, recompile your project, and you will see that everything is uploaded to the server.
(I assume that you have already set up your SSH client, generated public/private keys, added the public key to your remote server, and know the SSH credentials to connect to it.)
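For reference, the command the watcher effectively runs for each changed file would look roughly like this; the user, host, and remote path below are placeholders:
scp app.js deploy@example.com:/var/www/site/app.js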
I figured this out myself. I posted the answer here.
The two questions are kind of similar but not identical.
The approach I found is also not the best, since it stores the server password in clear text. So I'll leave the question open, in case someone can come up with a better way to achieve this.
I am a beginner attempting to learn SQL with Zed Shaw's "How to Learn SQL the Hard Way"
In Exercise 0: The Setup, he states:
Then, look to see that the test.db file is there. If that works then you're all set.
But when I run the commands,
sqlite> create table test (id);
sqlite> .quit
the commands run, but no test.db file is created. I looked in the same folder as the sqlite3.exe file and I see nothing.
I attempted to continue without this step. In his next exercise, "Exercise 1: Creating Tables":
I input his commands, but when attempting to run sqlite3 ex1.db < ex1.sql, it gives me an error.
I even tried putting the create table command into a '.sql' file and saving it in the same folder as sqlite3.exe.
How can I set this environment up properly? Can someone explain this on an "easy to grasp" level? Any response is appreciated.
Edit 1
I'm not exactly sure how Zed Shaw wants his learners to use SQLite 3. Maybe I can do some research, but I just don't understand why he leaves such a large gap, assuming that everyone knows what to do for the setup process...
I had this exact problem. I am using windows 7.
From this website: http://www.sqlite.org/download.html
I used the:
"sqlite-tools-win32-x86-3130000.zip
(1.51 MiB) A bundle of command-line tools for managing SQLite..."
under the "Precompiled Binaries for Windows" heading.
My helper directed me to open the file named "sqlite3" after I'd extracted the files from the zip file. It brought up a black window, in the command line style, showing some text, and then the familiar sqlite> prompt.
Then he had me input: .save data_base_name.db (I chose to name my database 'thedatabase'.)
This created a file in the same folder as the sqlite3 executable, called "data_base_name", with the type "Data Base File".
That's where I'm at so far, I'll post updates as I have them.
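For what it's worth, the underlying issue is that starting sqlite3 without a filename gives you a temporary in-memory database, so nothing is written to disk unless you either pass a filename when launching it or save explicitly. A minimal session, reusing the book's test.db name, could look like either of these:
sqlite3 test.db
sqlite> create table test (id);
sqlite> .quit
or, from a session started without a filename:
sqlite> create table test (id);
sqlite> .save test.db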
I would like to keep two versions of a static html file in my git repository. Both are basically identical, except for links for scripts, media etc (dev version vs. live version).
Right now I keep the dev version in repo, and overwrite the live version values manually on the live machine (=I have local git changes there). I am not happy with this setup, because there's manual labour for each push/pull.
What is the best flow for managing files that cannot be split into config/rest sections (like HTML)?
You could...
Remove the file from your repository and just manually populate it. If it doesn't change very often, this works just fine.
Remove the file from your repository, and generate it from a template via a post-merge script in .git/hooks/post-merge (this hook is run, for example, after git pull).
Name the file after the branch or hostname or some other variable (e.g., static.master.html vs. static.develop.html, etc) and dynamically determine which one to use at runtime.
Those are some ideas. I imagine other folks will contribute additional suggestions.
Expanding on the 2nd bullet point by larsks:
You could keep two copies in the repo (say it's your homepage): index.dev.html and index.prod.html. On the remote, your post-merge script could do something like:
cp -a index.prod.html index.html
or
truncate -s 0 index.html
cat index.prod.html >> index.html
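Put together, a minimal post-merge hook for the first variant (assuming index.html itself is not tracked on the live machine) might look like the sketch below; remember to make it executable with chmod +x .git/hooks/post-merge:
#!/bin/sh
# .git/hooks/post-merge -- runs automatically after a successful git pull/merge
cp -a index.prod.html index.html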
Another problem besides renaming is keeping the content of both files in sync. Having dedicated files that differ only in one minor path is a lot of redundancy: if you change one, you have to remember to update the other as well.
OK, you stated that the HTML file is static, but here a line of PHP to generate the difference would solve the problem.
Achim