Windows batch file scripting - keep formatting in HTML

I'm currently trying to write a simple batch script that will run a series of Windows commands and output them nicely into a single HTML file.
The command I'm running, for example, is netsh firewall show config. Its output looks something like this:
Domain profile configuration (current):
-------------------------------------------------------------------
Operational mode                  = Enable
Exception mode                    = Enable
Multicast/broadcast response mode = Enable
Notification mode                 = Enable

Service configuration for Domain profile:
Mode     Customized  Name
Notice the nice spacing used that makes it easy to read?
I'm trying to keep that in the HTML file that I create. So far, I have a simple batch script as follows:
@echo off
echo ^<font face="courier new" size="2"^> >>index.html
netsh firewall show config >> netsh
type netsh >>index.html
When that spools to an index.html file, the resultant output looks something like this:
Domain profile configuration (current): -----------------------------------------------
-------------------- Operational mode = Enable Exception mode = Enable
Multicast/broadcast response mode = Enable Notification mode = Enable Service
configuration for Domain profile: Mode Customized Name ------------------------------
------------------------------------ Enable No File and Printer Sharing Allowed\
programs configuration for Domain profile: Mode Traffic
So does anyone know how I can output the command in a nice way, such that the formatting is kept? It's really just a bunch of spaces and line breaks.
I appreciate your help in advance!

Use the <pre> tag to retain the text layout:
@echo off
echo ^<pre^> >>index.html
netsh firewall show config >> index.html
echo ^</pre^> >>index.html
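If you also want the monospace styling from the original attempt, here is a fuller sketch; the page skeleton and inline style are illustrative additions, not part of the answer above:
@echo off
REM Overwrite (>) on the first line so reruns start a fresh page; append (>>) after that.
echo ^<html^>^<body^> > index.html
echo ^<pre style="font-family: 'Courier New'; font-size: 12px;"^> >> index.html
netsh firewall show config >> index.html
echo ^</pre^>^</body^>^</html^> >> index.html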

Related

400-unknown or invalid client_id for forge-bim360-data.connector.dashboard

I have tried to implement https://github.com/Autodesk-Forge/forge-bim360-data.connector.dashboard
I have updated this part:
npm install
set FORGE_CLIENT_ID=<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>
set FORGE_CLIENT_SECRET=<<YOUR CLIENT SECRET>>
set FORGE_CALLBACK_URL=<<your callback url of Forge e.g. http://localhost:3000/oauth/callback>>
set DC_CALLBACK_URL=<<"your ngrok address here: e.g. http://abcd1234.ngrok.io/job/callback">>
I am getting the error: 400 - Unknown or invalid client_id
Firstly, I rarely use Windows OS now. I simply copied the guidance on setting environment variables from other samples, while most of the time I tried with debug mode (setting environment variables in launch.json).
Checking the Readme again, I found the wording is:
Windows (use Node.js command line from Start menu)
i.e. it asks you to input those commands in the Node.js command line, not the VSCode terminal! That is why it always reports that the client id is not defined: the variables are not set in the environment at all.
The correct way is to open the Node.js command line and run the commands there.
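For reference, the end-to-end flow in the Node.js command prompt would look roughly like this (a sketch: the values are placeholders, and npm start is assumed to be the sample's start command):
REM "set" only affects the current console session, so run the app from the same window.
cd forge-bim360-data.connector.dashboard
npm install
set FORGE_CLIENT_ID=your_client_id
set FORGE_CLIENT_SECRET=your_client_secret
set FORGE_CALLBACK_URL=http://localhost:3000/oauth/callback
set DC_CALLBACK_URL=http://abcd1234.ngrok.io/job/callback
npm start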

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology; I mean, there are few tutorials and few solutions to a lot of questions. The tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the composer channel in their community chat (looks like it's running on Discord or something) and asked the same question without a response; I have had better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I ran the command composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice#0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would be no restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again; I deleted my .bna and my network card files too so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work: the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched the names around, but I'm fully aware of it)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed the default permission rules to my own
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but that failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance!
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - check that once you get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
The procedure for the upgrade would normally be as follows:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to your permissions.acl to include the System ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
    description: "System ACL to permit all access"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}

rule NetworkAdminUser {
    description: "Grant business network administrators full access to user resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "**"
    action: ALLOW
}

rule NetworkAdminSystem {
    description: "Grant business network administrators full access to system resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (i.e. increment it - e.g. update the version property from 0.0.1 to 0.0.2).
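As an aside, if you'd rather not hand-edit package.json, npm can bump the field for you; --no-git-tag-version stops npm from also creating a git commit and tag:
npm version 0.0.2 --no-git-tag-version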
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code first:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect (-c takes the card name, not the network name):
composer network ping -c admin@a3-policy-network

convert VMX to OVF using OVFtool

I am trying to convert VMX to OVF format using OVFTool as below; however, it gives an error:
C:\Program Files\VMware\VMware OVF Tool>ovftool.exe vi://vcenter.com:port/folder/myfolder/abc.vmx abc.ovf
Error: Failed to open file: https://vcenter.com:port/folder/myfolder/abc.vmx
Completed with errors
Please let me know if you have any solution.
I had a similar situation in VMware Fusion trying to use a .vmx that was probably created on Windows. I could boot the VM, but any attempt to export the machine with ovftool or use vmware-vdiskmanager bombed out with:
Error: Failed to open disk: source.vmdk
Completed with errors
The disk name was totally valid, the path was valid, the permissions were valid, and the only clue came from running ovftool with:
ovftool --X:logToConsole --X:logLevel=verbose source.vmx dest.ova
Opening VMX source: source.vmx
verbose -[10C2513C0] Opening source
verbose -[10C2513C0] Failed to open disk: ./source.vmdk
verbose -[10C2513C0] Exception: Failed to open disk: source.vmdk. Reason: Disk encoding error
Error: Failed to open disk: source.vmdk
As others suggested, I took a peek in the .vmdk. Therein I found 3 other clues:
encoding="windows-1252"
createType="monolithicSparse"
# Extent description
RW 16777216 SPARSE "source.vmdk"
So first I converted the monolithicSparse vmdk to a "preallocated virtual disk split in 2GB files":
vmware-vdiskmanager -r source.vmdk -t3 foo.vmdk
Then I could edit "foo.vmdk" to change the encoding, which now looks like:
encoding="utf-8"
createType="twoGbMaxExtentFlat"
# Extent description
RW 8323072 FLAT "foo-f001.vmdk" 0
RW 8323072 FLAT "foo-f002.vmdk" 0
RW 131072 FLAT "foo-f003.vmdk" 0
And finally, after fixing up the source.vmx:
scsi0:0.fileName = "foo.vmdk"
profit:
ovftool source.vmx dest.ova
...
Opening VMX source: source.vmx
Opening OVA target: dest.ova
Writing OVA package: dest.ova
Transfer Completed
Completed successfully
I had a similar problem with OVFTool trying to export to OVF format.
Export failed: Failed to open file: C:\Virtual\test\test.vmx.
First, I opened the .VMX file in an editor (it's a text file) and made sure that settings like
scsi0:0.fileName = "test.vmdk"
nvram = "test.nvram"
extendedConfigFile = "test.vmxf"
mention the proper file names.
Then I noticed this line:
.encoding = "windows-1251"
This is a Cyrillic code page, so I modified it to use the Western code page:
.encoding = "windows-1252"
Then, running OVFTool gave a different error:
Export failed: Failed to open disk: test.vmdk.
To fix it, I had to open the .VMDK file in a hex editor (because it's usually a big binary file), find the string
encoding = "windows-1251"
(it's somewhere near the beginning of the file), and replace "1251" with "1252".
And it did the trick!
In my case, I needed to repair the disk 'abc.vmdk' before converting 'abc.vmx' to 'abc.ovf'.
Use this for Linux:
$ /usr/bin/vmware-vdiskmanager -R /home/user/VMware/abc.vmdk
See this link https://kb.vmware.com/s/article/2019259 for resolving the issue on Windows and Linux.
Try running it as described below:
C:\Program Files\VMware\VMware OVF Tool>ovftool C:\Win-Test\Win-Test.vmx C:\Win-Test\win-test.ovf
(The first path is the location of your vmx file; the second is the destination.) Maybe ovftool is unable to recognize the path you are giving.
Try the following command:
ovftool --eula@=[path to eula] --X:logToConsole --targetType=OVA --compress=9 vi://[username]@[ESX address] [target address]
Once you provide the ESX address, it will list the folders you have created in your ESX box. Then you can trigger the above command again, appending the folder name.
If no folder hierarchy is present in your box, it will simply list the VM names.
Retry the same command, appending [foldername]/[vmname] (no vmx file name required):
ovftool --eula@=[path to eula] --X:logToConsole --targetType=OVA --compress=9 vi://[username]@[ESX address]/[foldername if it exists]/[vmname] [target address]
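For instance, a filled-in command might look like this; the host, folder, and VM names are hypothetical, and ovftool prompts for the password:
REM Hypothetical ESX host and VM; adjust the names and the target path to your environment.
ovftool --X:logToConsole --targetType=OVA --compress=9 vi://root@esx.example.local/MyFolder/MyVM C:\exports\MyVM.ova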
I had this same exact issue. In my case, I opened up the VMX file, dropped the IDE and sound controllers from the file, and saved. I was then able to convert everything to an OVA using the tool with the standard syntax.
E.g. I dropped:
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
and:
sound.present = "TRUE"
sound.fileName = "-1"
sound.autodetect = "TRUE"
This allowed me to convert the file like normal.
For me, opening the .vmx and deleting the following line worked:
sata0:1.deviceType = "cdrom-image"
In my case, this worked. I had:
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
I changed TRUE to FALSE and it worked fine; as the cdrom-image did not exist, this change permitted the format conversion.
If your goal is to move a Windows-based VM to VirtualBox, you only need to:
uninstall VMware Tools from the guest VM
shut down the machine
copy the HD to a new folder
create a new empty VM in VirtualBox
mount the HD (the .vmdk file) in that VM
Easy and quick to do.
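If you prefer the command line for the VirtualBox side, the create/attach steps can be sketched with VBoxManage; the VM name, OS type, and disk path below are hypothetical:
REM Hypothetical names and paths; pick the --ostype that matches your guest.
VBoxManage createvm --name "win-vm" --ostype Windows10_64 --register
VBoxManage storagectl "win-vm" --name "SATA" --add sata --controller IntelAhci
VBoxManage storageattach "win-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "C:\VMs\win-vm\disk.vmdk"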

How do I set user-level flags for the Grid Engine qsub command?

I am running Grid Engine (GE 6.2u5) jobs from the command line. For example,
qsub echo "Hello"
But I get this error,
Unable to read script file because of error: error opening echo: No such file or directory
The workaround is easy: use the -b y flag. I'd like to create an SGE properties file in my home directory which will set '-b y' as the default. How do I do this?
If you want to add your option, you can edit the file "sge_request". It allows you to set the default options that will be added to any request you submit.
This file is situated in: SGE_ROOT/CELL_NAME/common/sge_request
For more information, check the documentation: http://gridscheduler.sourceforge.net/htmlman/htmlman5/sge_request.html
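For a per-user default, which is what the question asks for, the same man page also documents a private .sge_request file in your home directory; a minimal sketch:
# Append the default option to your per-user request file, then submit as before.
echo "-b y" >> ~/.sge_request
qsub echo "Hello"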

How to enable gzip HTTP compression on Windows Azure dynamic content

I've been trying unsuccessfully to enable gzip HTTP compression on my Windows Azure hosted WCF RESTful service, which returns JSON only from GET and POST requests.
I have tried so many things that I would have a hard time listing all of them, and I now realise I have been working with conflicting information (regarding old versions of Azure etc.), so I think it best to start with a clean slate!
I am working with Visual Studio 2008, using the February 2010 tools for Visual Studio.
So, according to the following link..
.. HTTP compression has now been enabled. I've used the advice at the following page (the urlCompression advice only):
http://blog.smarx.com/posts/iis-compression-in-windows-azure
<urlCompression doStaticCompression="true"
doDynamicCompression="true"
dynamicCompressionBeforeCache="true"
/>
.. but I get no compression. It doesn't help that I don't know what the difference is between urlCompression and httpCompression; I've tried to find out, but to no avail!
Could the fact that the tools for Visual Studio were released before the version of Azure which supports compression be a problem? I have read somewhere that, with the latest tools, you can choose which version of the Azure OS to use when you publish, but I don't know if that's true, and if it is, I can't find where to choose. Could I be using a version from before HTTP compression was enabled?
I've also tried the Blowery HTTP compression module, but got no results.
Does anyone have any up-to-date advice on how to achieve this, i.e. advice that relates to the current version of the Azure OS?
Cheers!
Steven
Update: I edited the above code to fix a typo in the web.config snippet.
Update 2: Testing the responses using the whatsmyip URL shown in the answer below shows that my JSON responses from my service.svc are being returned without any compression, but static HTML pages ARE being returned with gzip compression. Any advice on how to get the JSON responses to compress will be gratefully received!
Update 3: I tried a JSON response larger than 256KB to see if the problem was due to the JSON response being smaller than this, as mentioned in the comments below. Unfortunately the response is still uncompressed.
Well it took a very long time ... but I have finally solved this, and I want to post the answer for anyone else who is struggling. The solution is very simple and I've verified that it does definitely work!!
Edit your ServiceDefinition.csdef file to contain this in the WebRole tag:
<Startup>
  <Task commandLine="EnableCompression.cmd" executionContext="elevated" taskType="simple"></Task>
</Startup>
In your web-role, create a text file and save it as "EnableCompression.cmd"
EnableCompression.cmd should contain this:
%windir%\system32\inetsrv\appcmd set config /section:urlCompression /doDynamicCompression:True /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
.. and that's it! Done! This enables dynamic compression for the JSON returned by the web role, which (I think I read somewhere) has a rather odd MIME type, so make sure you copy the code exactly.
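To verify it worked, check the response headers for Content-Encoding: gzip, for example with curl; the URL below is a placeholder for your own service:
REM -D - prints the response headers; -o NUL discards the body (use -o /dev/null on Linux).
curl -s -D - -o NUL -H "Accept-Encoding: gzip" http://yourapp.cloudapp.net/Service.svc/YourMethod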
Well at least I'm not alone on this one - and it's still a stupid PITA almost a year later.
The problem is a MIME type mismatch. WCF returns JSON response with Content-Type: application/json; charset=UTF-8. The default IIS configuration, about halfway down that page, does not include that as a compressible MIME type.
Now, it might be tempting to add an <httpCompression> section to your web.config, and add application/json to that. But that's just a bad way to waste a good hour or two - you can only change the <httpCompression> element at the applicationHost.config level.
So there are two possible solutions. First, you could change your WCF response to use a MIME type that is compressible in the default configuration. text/json will work so adding this to your service method(s) will give you dynamic compression: WebOperationContext.Current.OutgoingResponse.ContentType = "text/json";
Alternatively, you could change the applicationHost.config file using appcmd and a startup task. This is discussed (among other things) on this thread. Note that if you add that startup task and run it in the dev fabric, it will work once. The second time it will fail because you already added the configuration element. I ended up creating a second cloud project with a separate csdef file, so that my devfabric would not run that startup script. There are probably other solutions though.
Update
My suggestion for separate projects in the previous paragraph is not really a good idea. Non-idempotent startup tasks are a very bad idea, because some day the Azure fabric will decide to restart your roles for you, the startup task will fail, and it'll go into a recycle loop. Most likely in the middle of the night. Instead, make your startup tasks idempotent as discussed on this SO thread.
To deal with the local development fabric having issues after the first deploy, I added the appropriate commands to the CMD file to reset the config. In addition, I'm setting the compression level here explicitly, since it appears to default to zero in some (all?) cases.
REM Remove old settings - keeps local deploys working (since you get errors otherwise)
%windir%\system32\inetsrv\appcmd reset config -section:urlCompression
%windir%\system32\inetsrv\appcmd reset config -section:system.webServer/httpCompression
REM urlCompression - is this needed?
%windir%\system32\inetsrv\appcmd set config -section:urlCompression /doDynamicCompression:True /commit:apphost
REM Enable json mime type
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
REM IIS Defaults
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='text/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='message/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/x-javascript',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='*/*',enabled='False']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='text/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='message/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='application/javascript',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='*/*',enabled='False']" /commit:apphost
REM Set dynamic compression level to appropriate level. Note gzip will already be present because of reset above, but compression level will be zero after reset.
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"[name='deflate',doStaticCompression='True',doDynamicCompression='True',dynamicCompressionLevel='7',dll='%%Windir%%\system32\inetsrv\gzip.dll']" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression -[name='gzip'].dynamicCompressionLevel:7 /commit:apphost
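After the startup task has run, you can confirm what actually got committed to applicationHost.config (run this on the role instance, e.g. over RDP):
%windir%\system32\inetsrv\appcmd list config -section:system.webServer/httpCompression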
This article from MS is their how-to script for JSON: http://msdn.microsoft.com/en-us/library/windowsazure/hh974418.aspx
It deals with many of the issues mentioned, e.g. being able to handle an Azure recycle etc.
I just had an issue with this regarding exit code 183, and I found a solution. So if anybody else is experiencing this, here goes:
Here's the error I got:
User program "F:\approot\bin\EnableCompression.cmd" exited with non-zero exit code 183. Working Directory is F:\approot\bin.
And here's the code that fixed it for me:
REM *** Add a compression section to the Web.config file. ***
%windir%\system32\inetsrv\appcmd set config /section:urlCompression /doDynamicCompression:True /commit:apphost >> "%TEMP%\StartupLog.txt" 2>&1
REM ERRORLEVEL 183 occurs when trying to add a section that already exists. This error is expected if this
REM batch file were executed twice. This can occur and must be accounted for in a Windows Azure startup
REM task. To handle this situation, set the ERRORLEVEL to zero by using the Verify command. The Verify
REM command will safely set the ERRORLEVEL to zero.
IF %ERRORLEVEL% EQU 183 VERIFY > NUL
REM If the ERRORLEVEL is not zero at this point, some other error occurred.
IF %ERRORLEVEL% NEQ 0 (
ECHO Error adding a compression section to the Web.config file. >> "%TEMP%\StartupLog.txt" 2>&1
GOTO ErrorExit
)
REM *** Add compression for json. ***
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost >> "%TEMP%\StartupLog.txt" 2>&1
IF %ERRORLEVEL% EQU 183 VERIFY > NUL
IF %ERRORLEVEL% NEQ 0 (
ECHO Error adding the JSON compression type to the Web.config file. >> "%TEMP%\StartupLog.txt" 2>&1
GOTO ErrorExit
)
REM *** Exit batch file. ***
EXIT /b 0
REM *** Log error and exit ***
:ErrorExit
REM Report the date, time, and ERRORLEVEL of the error.
DATE /T >> "%TEMP%\StartupLog.txt" 2>&1
TIME /T >> "%TEMP%\StartupLog.txt" 2>&1
ECHO An error occurred during startup. ERRORLEVEL = %ERRORLEVEL% >> "%TEMP%\StartupLog.txt" 2>&1
EXIT %ERRORLEVEL%
Solution found at http://msdn.microsoft.com/en-us/library/azure/hh974418.aspx
Yes, you can choose the OS you want, but by default, you'll get the latest.
Compression is tricky. There are lots of things that can go wrong. Are you by chance doing this testing behind a proxy server? I believe IIS by default doesn't send compressed content to proxies. I found a handy tool to test whether compression is working when I was playing with this: http://www.whatsmyip.org/http_compression/.
It looks like you have doDynamicCompression="false"... is that just a typo? You want that to be on if you're going to get compression on JSON you return from a web service.