First of all, let me list some versions so you know the setup.
OS: Windows 10
php: 7.2.7 NTS with Xdebug 2.6.1 active
PhpStorm: 2016.2.2
PHP_CodeSniffer: version 3.4.0 (stable) by Squiz (http://www.squiz.net)
PEAR: 1.10.7
Now let me describe the problem:
The code sniffer was installed via PEAR. I'm using the following .bat script to start the sniffer:
@echo off
set folder=C:\Program Files\php
set phpcs=%folder%\phpcs
php "%phpcs%" %*
If I start the code sniffer from PowerShell with the following command:
phpcs.bat index.php --standard=PSR2 --encoding=utf-8 --report=xml
I get valid output:
<?xml version="1.0" encoding="UTF-8"?>
<phpcs version="3.4.0">
<file name="C:\Users\simon\Documents\Repositories\mm-BIT\CatalogGenerator\index.php" errors="3" warnings="0" fixable="3">
<error line="1" column="1" source="Generic.Files.LineEndings.InvalidEOLChar" severity="5" fixable="1">End of line character is invalid; expected "\n" but found "\r\n"</error>
<error line="124" column="1" source="PSR2.Methods.FunctionCallSignature.SpaceBeforeOpenBracket" severity="5" fixable="1">Space before opening parenthesis of function call prohibited</error>
<error line="129" column="1" source="PSR2.Methods.FunctionCallSignature.SpaceBeforeOpenBracket" severity="5" fixable="1">Space before opening parenthesis of function call prohibited</error>
</file>
</phpcs>
In PhpStorm, the inspection settings point to this script, and validating the installation tells me everything is fine.
The data for the coding standards on the Inspections page was loaded automatically, so this seems to work as well.
When PhpStorm runs the script, I get the following error:
PHP Code Sniffer
phpcs: <?xml version="1.0" encoding="UTF-8"?>
I exposed some debug data via the script being called:
PHP Code Sniffer
phpcs: C:/temp/___1.tmp/Core/DataContainers/Language.php --standard=PSR2 --encoding=utf-8 --report=xml
command: php "C:\Program Files\php\phpcs" C:/temp/___1.tmp/Core/DataContainers/Language.php --standard=PSR2 --encoding=utf-8 --report=xml
<?xml version="1.0" encoding="UTF-8"?>
I checked the temp folder for write permissions and verified that the file is created correctly at the path mentioned above. I copied the folder as soon as it was created and ran the command line by hand in PowerShell successfully:
php "C:\Program Files\php\phpcs"
C:/temp/___.tmp/Core/DataContainers/Modules/SimpleTableModule.php --
standard=PSR2 --encoding=utf-8 --report=xml
which delivers the following output:
<?xml version="1.0" encoding="UTF-8"?>
<phpcs version="3.4.0">
<file name="C:\temp\___.tmp\Core\DataContainers\Modules\SimpleTableModule.php" errors="1" warnings="1" fixable="1">
<warning line="82" column="114" source="Generic.Files.LineLength.TooLong" severity="5" fixable="0">Line exceeds 120 characters; contains 123 characters</warning>
<error line="106" column="1" source="PSR2.Files.EndFileNewline.NoneFound" severity="5" fixable="1">Expected 1 newline at end of file; 0 found</error>
</file>
</phpcs>
I don't know how to fix this; if you could provide me with some ideas, I would be really happy.
The problem is solved: I replaced the content of the phpcs.bat file with the official script:
@echo off
REM PHP_CodeSniffer detects violations of a defined coding standard.
REM
REM @author Greg Sherwood <gsherwood@squiz.net>
REM @copyright 2006-2015 Squiz Pty Ltd (ABN 77 084 670 600)
REM @license https://github.com/squizlabs/PHP_CodeSniffer/blob/master/licence.txt BSD Licence
if "%PHP_PEAR_PHP_BIN%" neq "" (
set PHPBIN=%PHP_PEAR_PHP_BIN%
) else set PHPBIN=php
"%PHPBIN%" "%~dp0\phpcs" %*
Script in the repository
I had tried this before without success; I also gave myself full access rights on the PHP installation folder. The problem seems to be fixed now.
Still a big thanks to everyone taking a look.
Edit: I checked my bat file again and it was not completely identical to the one from the git repository; I had left my old code in the file as a comment. After I cleaned this up this morning, the sniffer stopped working; after I re-added the comments, it worked again. So here is the complete content of the file in its current working state:
@echo off
REM PHP_CodeSniffer detects violations of a defined coding standard.
REM
REM @author Greg Sherwood <gsherwood@squiz.net>
REM @copyright 2006-2015 Squiz Pty Ltd (ABN 77 084 670 600)
REM @license https://github.com/squizlabs/PHP_CodeSniffer/blob/master/licence.txt BSD Licence
if "%PHP_PEAR_PHP_BIN%" neq "" (
set PHPBIN=%PHP_PEAR_PHP_BIN%
) else set PHPBIN=php
"%PHPBIN%" "%~dp0\phpcs" %*
REM End of file
We confirmed this behavior on another computer with the same software installed, so this does seem to be the cause.
Edit 2: It seems you just need a comment line after the last line of the original script. I have updated the code snippet I'm using at the moment.
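If you run into something similar, one way to diagnose it is to check that the wrapper emits nothing before the XML declaration; a quick sanity check from cmd, reusing the file names above:
phpcs.bat index.php --standard=PSR2 --report=xml | findstr /n "^"
findstr /n "^" prefixes every output line with its line number; the <?xml declaration has to be on line 1 for PhpStorm to parse the report.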
I managed to set up Artifactory using our existing Tomcat. I have set ARTIFACTORY_HOME=/opt/artifactory; that part works well. There is, however, also the JFrog access.war file, which needs to be running as well. I couldn't figure out which variable to use to specify its home, so it defaults to ~/.jfrog_access, which is not at all what I want.
I moved the content over to my $ARTIFACTORY_HOME/access and symlinked it, but that's surely not the way to go. Any help appreciated.
In case someone stumbles over this thread and struggles with the same problem:
The solution for me was to also extract the context files (access.xml and artifactory.xml, which are available in the zip file under <zip extract>/misc/tomcat) to the Tomcat configuration folder, e.g. $CATALINA_HOME/conf/Catalina/localhost/. After that, the $ARTIFACTORY_HOME env var is recognized on Access startup.
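For example (a sketch; /tmp/artifactory-extract is a placeholder for wherever you unpacked the zip):
sudo cp /tmp/artifactory-extract/misc/tomcat/access.xml "$CATALINA_HOME/conf/Catalina/localhost/"
sudo cp /tmp/artifactory-extract/misc/tomcat/artifactory.xml "$CATALINA_HOME/conf/Catalina/localhost/"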
A previous answer finally put me on the right track for solving this problem on Amazon Linux.
In addition to copying access.xml and artifactory.xml to ${catalina.home}/conf/Catalina/MY_HOSTNAME, I found that some other changes were needed.
I modified the docBase attributes in the XML context files because my server has multiple hostnames:
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/access.xml
<Context path="/access" docBase="${catalina.home}/host/repo.mydomain.org/access.war">
<Parameter name="jfrog.access.bundled" value="true" override="true"/>
<!-- enable annotations scanning of access jar files -->
<JarScanner scanClassPath="false">
<JarScanFilter defaultPluggabilityScan="false" pluggabilityScan="access*" defaultTldScan="false"/>
</JarScanner>
</Context>
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/artifactory.xml
<Context crossContext="true" path="/artifactory" docBase="${catalina.home}/host/repo.mydomain.org/artifactory.war">
</Context>
Important Note: In order to prevent the above two XML files from being deleted by Tomcat Manager during upgrades via Undeploy/Deploy WAR, make sure they are owned by root and not writable by the tomcat user:
chown root.root access.xml artifactory.xml
chmod 644 access.xml artifactory.xml
If you forget to do the above, you will likely end up missing these files, which will break the communication between the access and artifactory web applications, resulting in login failures ("Username or Password Are Incorrect"). In this case, these errors result from the lack of communication between the web applications, not a problem with the credentials themselves.
/usr/share/tomcat8/conf/Catalina/repo.mydomain.org/manager.xml
This gives me the ability to upload new versions of access.war and artifactory.war via https://repo.mydomain.org:8443/manager/html:
<Context docBase="${catalina.home}/webapps/manager" privileged="true" antiResourceLocking="false">
</Context>
Additionally, I created the following folder to serve as the artifactory.home:
sudo mkdir /usr/share/artifactory
sudo chown tomcat.tomcat /usr/share/artifactory
tomcat8.conf
Add (or modify) the following line:
JAVA_OPTS="-Dartifactory.home=/usr/share/artifactory -Djfrog.access.home=/usr/share/artifactory/access -Dartifactory.access.client.serverUrl.override=http://localhost:8080/access"
Note: The Access Client URL specified above must use localhost to avoid the Server HTTP header being rewritten by Apache and its modules. For instance, if I use:
https://repo.mydomain.org/access/api/v1/system/ping
The Server HTTP header value in the response is:
Server: Apache/2.4.33 (Amazon) OpenSSL/1.0.2k-fips mod_jk/1.2.43
And the Access Client produces the following exception:
[ERROR] (o.j.a.c.AccessClientImpl:154) - Access client/server version mismatch. Client version: 4.1.5, Server version: 2.4.33 (Amazon) OpenSSL
This means the Access Client depends on the first string matching #.#.# in the Server header. This seems like a really fragile part of the Access Client; they should have used X-JFrog-Access-Server or something instead of relying on a value that is set by the web server. So, to reiterate, use http://localhost:8080/access to connect directly to the Tomcat server.
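If you want to see the difference for yourself, fetch the headers from both routes and compare the Server values (hostname as above, curl assumed available):
curl -s -D - -o /dev/null https://repo.mydomain.org/access/api/v1/system/ping
curl -s -D - -o /dev/null http://localhost:8080/access/api/v1/system/ping
The first goes through Apache and reports its version string; the second hits Tomcat directly.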
Artifactory 6.2.0 depends on Apache Derby (the specific version can be found in jfrog-artifactory-oss-6.2.0.zip\artifactory-oss-6.2.0\tomcat\lib). This should be added as a shared library to Tomcat:
mkdir /usr/share/tomcat8/shared
cd /usr/share/tomcat8/shared
wget http://central.maven.org/maven2/org/apache/derby/derby/10.11.1.1/derby-10.11.1.1.jar
Add or modify the following line in catalina.properties:
shared.loader=${catalina.home}/shared/*.jar
Since we want https://repo.mydomain.org to go to the Artifactory webapp:
mkdir /usr/share/tomcat8/host/repo.mydomain.org/ROOT
echo '<html><head><meta http-equiv="refresh" content="0;URL=/artifactory"></meta></head><body></body></html>' > /usr/share/tomcat8/host/repo.mydomain.org/ROOT/index.html
And make sure the services automatically start on reboot:
sudo chkconfig httpd on
sudo chkconfig tomcat8 on
Artifactory will then be available at the url:
https://repo.mydomain.org/artifactory/webapp/
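A quick way to confirm the whole chain after a reboot (hypothetical hostname as above):
curl -s -o /dev/null -w '%{http_code}\n' https://repo.mydomain.org/artifactory/webapp/
Anything other than a 200 (or a redirect into the webapp) means it's time to check the Tomcat logs.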
EDIT
The event log error was this:
error 0x8007000B: The app manifest publisher name (CN=...)
must match the subject name of the signing certificate
(CN={19BE29DF-4812-4F2E-8FC1-A138B146946A}).
The command below now seems to work. So either it was user error on my part that I cannot identify, or something hinky with the state of the machine when I was seeing this. The GUID associated with the signing cert in the event log message is not what the cert shows in the Certificate Manager snap-in, which is weird.
Original Question
I am attempting to sign a UWP appx package that was generated using MakeAppx.exe. The pfx is a developer code signing certificate generated with these commands from https://msdn.microsoft.com/windows/uwp/porting/desktop-to-uwp-manual-conversion.
C:\> MakeCert.exe -r -h 0 -n "CN=<publisher_name>" -eku 1.3.6.1.5.5.7.3.3 -pe -sv <my.pvk> <my.cer>
C:\> pvk2pfx.exe -pvk <my.pvk> -spc <my.cer> -pfx <my.pfx>
The certificate is in my trusted root cert store and worked when I generated an appx from an installer using the Desktop App Converter.
The command line I am using is:
signtool.exe sign -f <path to my pfx file> -fd SHA256 -v .\FishTank.appx
but SignTool is erroring with this:
The following certificate was selected:
Issued to: ...
Issued by: ...
Expires: Sat Dec 31 18:59:59 2039
SHA1 hash: ...
Done Adding Additional Store
Error information: "Error: SignerSign() failed." (-2147024885/0x8007000b)
The certificate publisher matches what is in the AppxManifest.xml:
<?xml version="1.0" encoding="utf-8"?>
<Package
xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities">
<Identity Name="..."
ProcessorArchitecture="x64"
Publisher="CN=..."
Version="1.1.0.0" />
<Properties>
<DisplayName>Fish Tank</DisplayName>
<PublisherDisplayName>Reserved</PublisherDisplayName>
<Description>Some fish. Swimming around on your screen.</Description>
<Logo>StoreLogo.png</Logo>
</Properties>
<Resources>
<Resource Language="en-us" />
</Resources>
<Dependencies>
<TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.14316.0" MaxVersionTested="10.0.14316.0" />
</Dependencies>
<Capabilities>
<rescap:Capability Name="runFullTrust"/>
</Capabilities>
<Applications>
<Application Id="FishTank" Executable="FishTank.exe" EntryPoint="Windows.FullTrustApplication">
<uap:VisualElements
BackgroundColor="#464646"
DisplayName="Fish Tank"
Square150x150Logo="Square150x150Logo.png"
Square44x44Logo="Square44x44Logo.png"
Description="Some fish. Swimming around on your screen." />
</Application>
</Applications>
</Package>
As answered here (though for a different error code), you have to make sure that the Publisher name (in the AppxManifest.xml file) is the same as the certificate's subject name.
For more information, see here (the bottom "Remarks" section).
The MakeCert -n argument has to be the full Publisher string from your XML.
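As a concrete illustration (the publisher string below is invented, not taken from the question):
REM The -n value must equal the Identity Publisher attribute byte for byte:
MakeCert.exe -r -h 0 -n "CN=Contoso Software, O=Contoso, C=US" -eku 1.3.6.1.5.5.7.3.3 -pe -sv my.pvk my.cer
REM ...which pairs with this manifest identity:
REM <Identity Name="..." Publisher="CN=Contoso Software, O=Contoso, C=US" Version="1.1.0.0" />
To compare against an existing certificate, certutil -dump my.pfx prints the subject of the cert inside the .pfx.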
I am trying to convert VMX to OVF format using OVFTool as below; however, it gives an error:
C:\Program Files\VMware\VMware OVF Tool>ovftool.exe vi://vcenter.com:port/folder/myfolder/abc.vmx abc.ovf
Error: Failed to open file: https://vcenter.com:port/folder/myfolder/abc.vmx
Completed with errors
Please let me know if you have any solution.
I had a similar situation in VMware Fusion trying to use a .vmx that was probably created on Windows. I could boot the VM, but any attempt to export the machine with ovftool or use vmware-vdiskmanager bombed out with:
Error: Failed to open disk: source.vmdk
Completed with errors
The disk name was totally valid, the path was valid, the permissions were valid, and the only clue came from running ovftool with:
ovftool --X:logToConsole --X:logLevel=verbose source.vmx dest.ova
Opening VMX source: source.vmx
verbose -[10C2513C0] Opening source
verbose -[10C2513C0] Failed to open disk: ./source.vmdk
verbose -[10C2513C0] Exception: Failed to open disk: source.vmdk. Reason: Disk encoding error
Error: Failed to open disk: source.vmdk
As others suggested, I took a peek in the .vmdk. Therein I found three other clues:
encoding="windows-1252"
createType="monolithicSparse"
# Extent description
RW 16777216 SPARSE "source.vmdk"
So first I converted the monolithicSparse vmdk to a "preallocated virtual disk split in 2GB files":
vmware-vdiskmanager -r source.vmdk -t3 foo.vmdk
Then I could edit "foo.vmdk" to change the encoding; it now looks like:
encoding="utf-8"
createType="twoGbMaxExtentFlat"
# Extent description
RW 8323072 FLAT "foo-f001.vmdk" 0
RW 8323072 FLAT "foo-f002.vmdk" 0
RW 131072 FLAT "foo-f003.vmdk" 0
And finally, after fixing up the source.vmx:
scsi0:0.fileName = "foo.vmdk"
profit:
ovftool source.vmx dest.ova
...
Opening VMX source: source.vmx
Opening OVA target: dest.ova
Writing OVA package: dest.ova
Transfer Completed
Completed successfully
I had a similar problem with OVFTool trying to export to OVF format.
Export failed: Failed to open file: C:\Virtual\test\test.vmx.
First, I opened the .VMX file in an editor (it's a text file) and made sure that settings like
scsi0:0.fileName = "test.vmdk"
nvram = "test.nvram"
extendedConfigFile = "test.vmxf"
point to the proper file names.
Then I noticed this line:
.encoding = "windows-1251"
This is a Cyrillic code page, so I modified it to use the Western code page:
.encoding = "windows-1252"
Then, running OVFTool gave a different error:
Export failed: Failed to open disk: test.vmdk.
To fix it I had to open the .VMDK file in a hex editor (because it's usually a big binary file), find the string
encoding = "windows-1251"
(it's somewhere near the beginning of the file), and replace "1251" with "1252".
And it did the trick!
In my case, I needed to repair the disk abc.vmdk before converting abc.vmx to abc.ovf.
Use this on Linux:
$ /usr/bin/vmware-vdiskmanager -R /home/user/VMware/abc.vmdk
See https://kb.vmware.com/s/article/2019259 for resolving the issue on both Windows and Linux.
Try running it as described below:
C:\Program Files\VMware\VMware OVF Tool>ovftool C:\Win-Test\Win-Test.vmx C:\Win-Test\win-test.ovf
(the first path is the location of your .vmx file, the second is the destination). Maybe ovftool is unable to recognize the path you are giving.
Try the following command:
ovftool --eula@=[path to eula] --X:logToConsole --targetType=OVA --compress=9 vi://[username]@[ESX address] [target address]
Once you provide the ESX address, it will list the folders you have created in your ESX box. Then run the command above again, appending the folder name.
If there is no folder hierarchy in your box, it will simply list the VM names.
Retry the same command, appending [foldername]/[vmname] (no .vmx file name required):
ovftool --eula@=[path to eula] --X:logToConsole --targetType=OVA --compress=9 vi://[username]@[ESX address]/[foldername if it exists]/[vmname] [target address]
I had this same exact issue. In my case I opened up the VMX file and dropped the IDE and sound controllers from the file and saved. I was then able to convert everything to an OVA using the tool with the standard syntax.
e.g. I dropped:
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
and:
sound.present = "TRUE"
sound.fileName = "-1"
sound.autodetect = "TRUE"
This allowed me to convert the file like normal.
For me, opening the .vmx and deleting the following line worked:
sata0:1.deviceType = "cdrom-image"
In my case, these lines were the culprit:
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
I changed TRUE to FALSE and it works fine; since the CD-ROM image does not exist, this change permits the format conversion.
If your goal is to move a Windows-based VM to VirtualBox, you only need to:
uninstall VMware Tools from the guest VM
shut down the machine
copy the HD to a new folder
create a new empty VM in VirtualBox
mount the HD (the .vmdk file) in that VM
Easy and quick to do.
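The VirtualBox side can be scripted too; a sketch with VBoxManage (VM name and disk path are placeholders):
VBoxManage createvm --name "migrated-vm" --register
VBoxManage storagectl "migrated-vm" --name "SATA" --add sata
VBoxManage storageattach "migrated-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium /path/to/copied/disk.vmdk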
When I try to source an SQL file I get the error:
mysql> source C:/Users/tom/Documents/insert.sql
ERROR:
Failed to open file 'C:/Users/tom/Documents/insert.sql', error: 2
I have checked the file path, which looks fine to me. I have also tried \. C:/Users/etc.
I am trying to source the SQL file, which holds insert statements for particular tables. All the statements in the file work when entered manually. What else could I be doing wrong?
I have tried using both backslashes and forward slashes in this command.
Probably a problem of access rights on the file (the file is being accessed by the mysqld server process, not by you). Try placing the file in MySQL's data folder, then import it from that location. The location of the data folder depends on your distribution and on your own configuration.
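If you are not sure where the data folder is, the server can tell you; a quick check from a shell (credentials are placeholders):
mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"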
Alternatively, feed the SQL script directly to your mysql client's stdin:
mysql [all relevant options] your_database < C:\path\to\your\script.sql
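For example, on Windows, with placeholder user and database names:
mysql -u tom -p tomsdb < C:\Users\tom\Documents\insert.sql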
I am using Ubuntu 14.04. I too faced the error 2 below:
mysql> SOURCE home/loc/Downloads/AllTables.sql;
Failed to open file 'home/loc/Downloads/AllTables.sql', error: 2
Solution :
mysql> SOURCE /home/loc/Downloads/AllTables.sql;
Just added a '/' in front of home
Hope this helps someone.
Have you checked if the file exists? I have had this problem before.
I've been trying unsuccessfully to enable gzip HTTP compression on my Windows Azure hosted WCF RESTful service, which returns JSON only from GET and POST requests.
I have tried so many things that I would have a hard time listing all of them, and I now realise I have been working with conflicting information (regarding old versions of Azure etc.), so I think it best to start with a clean slate!
I am working with Visual Studio 2008, using the February 2010 tools for Visual Studio.
So, HTTP compression has now been enabled. I've used the advice at the following page (the urlCompression advice only):
http://blog.smarx.com/posts/iis-compression-in-windows-azure
<urlCompression doStaticCompression="true"
doDynamicCompression="true"
dynamicCompressionBeforeCache="true"
/>
.. but I get no compression. It doesn't help that I don't know what the difference is between urlCompression and httpCompression; I've tried to find out, but to no avail!
Could the fact that the tools for Visual Studio were released before the version of Azure which supports compression be a problem? I have read somewhere that, with the latest tools, you can choose which version of the Azure OS to use when you publish, but I don't know if that's true, and if it is, I can't find where to choose. Could I be using a version from before HTTP compression was enabled?
I've also tried the Blowery HTTP compression module, but no results.
Does any one have any up-to-date advice on how to achieve this? i.e. advice that relates to the current version of the Azure OS.
Cheers!
Steven
Update: I edited the above code to fix a typo in the web.config snippet.
Update 2: Testing the responses using the whatsmyip URL shown in the answer below shows that my JSON responses from my service.svc are returned without any compression, but static HTML pages ARE returned with gzip compression. Any advice on how to get the JSON responses to compress will be gratefully received!
Update 3: Tried a JSON response larger than 256KB to see if the problem was due to the JSON response being smaller than this, as mentioned in the comments below. Unfortunately the response is still uncompressed.
Well it took a very long time ... but I have finally solved this, and I want to post the answer for anyone else who is struggling. The solution is very simple and I've verified that it does definitely work!!
Edit your ServiceDefinition.csdef file to contain this in the WebRole tag:
<Startup>
<Task commandLine="EnableCompression.cmd" executionContext="elevated" taskType="simple"></Task>
</Startup>
In your web role, create a text file and save it as "EnableCompression.cmd".
EnableCompression.cmd should contain this:
%windir%\system32\inetsrv\appcmd set config /section:urlCompression /doDynamicCompression:True /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
.. and that's it! Done! This enables dynamic compression for the JSON returned by the web role, which (I think I read somewhere) has a rather odd MIME type, so make sure you copy the code exactly.
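To verify it worked, request the service with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers; a quick check (the URL is a placeholder for your own endpoint):
curl -s -D - -o NUL -H "Accept-Encoding: gzip" http://yourapp.cloudapp.net/service.svc/endpoint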
Well at least I'm not alone on this one - and it's still a stupid PITA almost a year later.
The problem is a MIME type mismatch. WCF returns the JSON response with Content-Type: application/json; charset=UTF-8. The default IIS configuration does not include that as a compressible MIME type.
Now, it might be tempting to add an <httpCompression> section to your web.config, and add application/json to that. But that's just a bad way to waste a good hour or two - you can only change the <httpCompression> element at the applicationHost.config level.
So there are two possible solutions. First, you could change your WCF response to use a MIME type that is compressible in the default configuration. text/json will work so adding this to your service method(s) will give you dynamic compression: WebOperationContext.Current.OutgoingResponse.ContentType = "text/json";
Alternatively, you could change the applicationHost.config file using appcmd and a startup task. This is discussed (among other things) in this thread. Note that if you add that startup task and run it in the dev fabric, it will work once; the second time it will fail, because you already added the configuration element. I ended up creating a second cloud project with a separate csdef file so that my dev fabric would not run that startup script. There are probably other solutions, though.
Update
My suggestion for separate projects in the previous paragraph is not really a good idea. Non-idempotent startup tasks are a very bad idea, because some day the Azure fabric will decide to restart your roles for you, the startup task will fail, and it'll go into a recycle loop. Most likely in the middle of the night. Instead, make your startup tasks idempotent as discussed on this SO thread.
To deal with local development fabric having issues after first deploy, I added the appropriate commands to the CMD file to reset config. In addition, I'm setting compression level here specifically, since it appears to default to zero in some (all?) cases.
REM Remove old settings - keeps local deploys working (since you get errors otherwise)
%windir%\system32\inetsrv\appcmd reset config -section:urlCompression
%windir%\system32\inetsrv\appcmd reset config -section:system.webServer/httpCompression
REM urlCompression - is this needed?
%windir%\system32\inetsrv\appcmd set config -section:urlCompression /doDynamicCompression:True /commit:apphost
REM Enable json mime type
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
REM IIS Defaults
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='text/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='message/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/x-javascript',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='*/*',enabled='False']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='text/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='message/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='application/javascript',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='*/*',enabled='False']" /commit:apphost
REM Set dynamic compression level to appropriate level. Note gzip will already be present because of reset above, but compression level will be zero after reset.
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"[name='deflate',doStaticCompression='True',doDynamicCompression='True',dynamicCompressionLevel='7',dll='%%Windir%%\system32\inetsrv\gzip.dll']" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression -[name='gzip'].dynamicCompressionLevel:7 /commit:apphost
This article from MS is their how-to script for JSON: http://msdn.microsoft.com/en-us/library/windowsazure/hh974418.aspx.
It deals with many of the issues mentioned, e.g. being able to handle Azure recycles.
I just had an issue with this regarding error code 183, and I found a solution. So if anybody else is experiencing this, here goes:
Here's the error I got:
User program "F:\approot\bin\EnableCompression.cmd" exited with non-zero exit code 183. Working Directory is F:\approot\bin.
And here's the code that fixed it for me:
REM *** Add a compression section to the Web.config file. ***
%windir%\system32\inetsrv\appcmd set config /section:urlCompression /doDynamicCompression:True /commit:apphost >> "%TEMP%\StartupLog.txt" 2>&1
REM ERRORLEVEL 183 occurs when trying to add a section that already exists. This error is expected if this
REM batch file were executed twice. This can occur and must be accounted for in a Windows Azure startup
REM task. To handle this situation, set the ERRORLEVEL to zero by using the Verify command. The Verify
REM command will safely set the ERRORLEVEL to zero.
IF %ERRORLEVEL% EQU 183 VERIFY > NUL
REM If the ERRORLEVEL is not zero at this point, some other error occurred.
IF %ERRORLEVEL% NEQ 0 (
ECHO Error adding a compression section to the Web.config file. >> "%TEMP%\StartupLog.txt" 2>&1
GOTO ErrorExit
)
REM *** Add compression for json. ***
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost >> "%TEMP%\StartupLog.txt" 2>&1
IF %ERRORLEVEL% EQU 183 VERIFY > NUL
IF %ERRORLEVEL% NEQ 0 (
ECHO Error adding the JSON compression type to the Web.config file. >> "%TEMP%\StartupLog.txt" 2>&1
GOTO ErrorExit
)
REM *** Exit batch file. ***
EXIT /b 0
REM *** Log error and exit ***
:ErrorExit
REM Report the date, time, and ERRORLEVEL of the error.
DATE /T >> "%TEMP%\StartupLog.txt" 2>&1
TIME /T >> "%TEMP%\StartupLog.txt" 2>&1
ECHO An error occurred during startup. ERRORLEVEL = %ERRORLEVEL% >> "%TEMP%\StartupLog.txt" 2>&1
EXIT %ERRORLEVEL%
Solution found at http://msdn.microsoft.com/en-us/library/azure/hh974418.aspx
Yes, you can choose the OS you want, but by default, you'll get the latest.
Compression is tricky. There are lots of things that can go wrong. Are you by chance doing this testing behind a proxy server? I believe IIS by default doesn't send compressed content to proxies. I found a handy tool to test whether compression is working when I was playing with this: http://www.whatsmyip.org/http_compression/.
It looks like you have doDynamicCompression="false"... is that just a typo? You want that to be on if you're going to get compression on JSON you return from a web service.