Editing php.ini to allow certain features - json

Okay, so I've been testing and building a website on my computer through localhost, and everything works fine there. I then wanted to upload it to my GoDaddy hosting account, and that's when I got an error. I am using the result of json_decode() as the argument for one of the foreach loops in my PHP. When I run the site through the hosting provider, it tells me there is an invalid argument in the foreach() loop on line 43. I figured it had to do with my php.ini file, so I copied the one from my computer and pasted it into the php.ini file for my site on GoDaddy. Then the foreach() loop worked! But then all kinds of hell broke loose: session problems and such. So, my question is, what do I need to add to make json_decode() work?
Thanks
Here is my php.ini file with the hosting provider:
register_globals = off
allow_url_fopen = off
expose_php = Off
max_input_time = 60
variables_order = "EGPCS"
extension_dir = ./
extension=json.so
upload_tmp_dir = /tmp
precision = 12
SMTP = relay-hosting.secureserver.net
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="
; Only uncomment zend optimizer lines if your application requires Zend Optimizer support
;[Zend]
;zend_optimizer.optimization_level=15
;zend_extension_manager.optimizer=/usr/local/Zend/lib/Optimizer-3.3.3
;zend_extension_manager.optimizer_ts=/usr/local/Zend/lib/Optimizer_TS-3.3.3
;zend_extension=/usr/local/Zend/lib/Optimizer-3.3.3/ZendExtensionManager.so
;zend_extension_ts=/usr/local/Zend/lib/Optimizer_TS-3.3.3/ZendExtensionManager_TS.so
; -- Be very careful to not to disable a function which might be needed!
; -- Uncomment the following lines to increase the security of your PHP site.
;disable_functions = "highlight_file,ini_alter,ini_restore,openlog,passthru,
; phpinfo, exec, system, dl, fsockopen, set_time_limit,
;

You can't just replace the php.ini file, because it has paths hard-coded in it.
For example, with your session error, most likely the setting session.save_path is referencing a directory that doesn't exist or has incorrect permissions.
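For example, pointing it at a directory that actually exists and is writable on the host (the path below is only a placeholder) would look like:
session.save_path = "/path/to/a/writable/tmp/dir"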
Can you post the line of code that was on line 43? I am guessing that your local php.ini doesn't display the error whereas the godaddy config does.
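In the meantime, guarding the result of json_decode() avoids the invalid-argument warning whichever php.ini is in effect. A minimal sketch (assuming $json holds the raw string you decode on line 43):
$data = json_decode($json, true); // returns NULL on failure
if (is_array($data)) {
    foreach ($data as $item) {
        // ... work with $item as before ...
    }
} else {
    error_log('json_decode() failed or did not return an array');
}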


How to configure Xdebug for JetBrains PhpStorm 2020.1?

So, I was quite happily debugging my PHP code with PhpStorm - until Windows became severely corrupted ... and my backup regime turned out to be not quite as good as I had thought (let that be a lesson to many of us :-( ).
Here's the relevant part of my php.ini:
[PHP]
[Xdebug]
; ---- trying to follow PHP storm's advice
zend_extension = "e:\coding\Web_development\php\php\ext\php_xdebug-3.0.1-7.3-vc15-x86_64.dll"
xdebug.remote_enable = 1
xdebug.remote_handler = dbgp
xdebug.remote_host = 127.0.0.1
;xdebug.remote_port = 9000
;xdebug.remote_mode = req
xdebug.idekey="xdebug"
; ---------- previously worked
;xdebug.remote_enable=1
;xdebug.remote_host=127.0.0.1
;xdebug.remote_port=9000
;xdebug.remote_autostart=1
;xdebug.remote_handler=dbgp
;xdebug.idekey="xdebug"
;xdebug.remote_log=m:\xdebug.log
;xdebug.profiler_enable=0
;xdebug.profiler_enable_trigger=0
;;xdebug.profiler_output_dir="F:\DropBox\programs\xampp\htdocs\_PHP_profile"
;xdebug.profiler_output_name=cachegrind.out.%s.%t
And here's what PhpStorm says:
BUT much of that does not actually exist at https://xdebug.org/docs/all_settings - as if some of those settings are no longer relevant/supported.
So, can anyone post the relevant [Xdebug] portion of php.ini for PhpStorm 2020.1?
The upgrade that's catching you out here is not PhpStorm, it's Xdebug: Xdebug 3.0 came out a couple of weeks ago and has completely overhauled the settings. As mentioned in one of the messages in your screenshot, there is an upgrade guide on the Xdebug site.
It looks like PhpStorm's checking script isn't fully updated yet, so it's recommending a confusing mixture of old and new settings.
The most important changes are:
The new xdebug.mode setting toggles a whole bunch of settings at once rather than having to remember the right combination. Some settings are simply no longer needed because of this.
The default port is now 9003 instead of 9000, because of some other popular software using the same port.
A lot of remaining settings have been renamed to be clearer.
Looking down your old config:
zend_extension = "e:\coding\Web_development\php\php\ext\php_xdebug-3.0.1-7.3-vc15-x86_64.dll"
; this tells PHP to load the XDebug extension
; note that the file name includes the version number, confirming that you're using v3
xdebug.remote_enable=1
; now implied by xdebug.mode=debug
xdebug.remote_host=127.0.0.1
; renamed xdebug.client_host
xdebug.remote_port=9000
; renamed xdebug.client_port
; also, the default is now 9003 not 9000
; so either set to 9000 here, or tell PhpStorm to use port 9003
xdebug.remote_autostart=1
; replaced with xdebug.start_with_request=yes
xdebug.remote_handler=dbgp
; no longer needed, as there was only one valid value
xdebug.idekey="xdebug"
; still supported, but not usually needed
xdebug.remote_log=m:\xdebug.log
; replaced by xdebug.log
xdebug.profiler_enable=0
; now implied by xdebug.mode=debug
xdebug.profiler_enable_trigger=0
; now implied by xdebug.mode=debug
xdebug.profiler_output_dir="F:\DropBox\programs\xampp\htdocs\_PHP_profile"
; not needed for debugging
xdebug.profiler_output_name=cachegrind.out.%s.%t
; not needed for debugging
So your new config should I believe look like this:
zend_extension = "e:\coding\Web_development\php\php\ext\php_xdebug-3.0.1-7.3-vc15-x86_64.dll"
xdebug.mode=debug
xdebug.client_host=127.0.0.1
xdebug.client_port=9000 ; or 9003, but should match the setting in PhpStorm
xdebug.start_with_request=yes
xdebug.idekey="xdebug"
xdebug.log=m:\xdebug.log
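Once php.ini is saved and the web server (or CLI) is restarted, a quick sanity check from the command line should show Xdebug loaded and the new mode in effect, e.g.:
php -v
php -r "var_dump(phpversion('xdebug'), ini_get('xdebug.mode'));"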

How to connect to local MySQL Server 8.0 with DBIish in Perl6

I'm working on a Perl6 project, but having difficulty connecting to MySQL. Even when using the DBIish (or perl6.org tutorial) example code, the connection fails. Any suggestions or advice are appreciated! User credentials have been confirmed accurate too.
I'm running this on Windows 10 with MySQL Server 8.0 and standard Perl 6 with Rakudo Star. I have tried modifying the connection string in numerous ways, like :$password, :password<>, :password(), etc., but can't get a connection established. I should also note that I have the ODBC, C, C++, and .NET connectors installed.
#!/usr/bin/perl6
use v6.c;
use lib 'lib';
use DBIish;
use Register::User;
# Windows support
%*ENV<DBIISH_MYSQL_LIB> = "C:/Program Files/MySQL/MySQL Server 8.0/lib/libmysql.dll"
if $*DISTRO.is-win;
my $dbh = DBIish.connect('mysql', :host<localhost>, :port(3306), :database<dbNameHere>, :user<usernameHere>, :password<pwdIsHere>) or die "couldn't connect to database";
my $sth = $dbh.prepare(q:to/STATEMENT/);
SELECT *
FROM users
STATEMENT
$sth.execute();
my @rows = $sth.allrows();
for @rows { .print }
say @rows.elems;
$sth.finish;
$dbh.dispose;
This should be connecting to the DB. Then the app runs a query, followed by printing out each resulting row. What actually happens is the application hits the 'die' message every time.
This is more of a workaround, but being unable to use a DB is crippling. Even when trying to use NativeLibs I couldn't get a connection via DBIish, so instead I have opted to use DB::MySQL, which is proving to be quite helpful. With a few lines of code this module has your DB needs covered:
use DB::MySQL;
my $mysql = DB::MySQL.new(:database<databaseName>, :user<userName>, :password<passwordHere>);
my @users = $mysql.query('select * from users').arrays;
for @users { say "user #$_[0]: $_[1] $_[2]"; }
#Results would be:
#user #1: FirstName LastName
#user #2: FirstName LastName
#etc...
This will print out a line for each user, formatted as shown above. It's not as familiar as DBIish, but this module gets the job done as needed. There's plenty more you can do with it too, so I highly recommend reading the docs.
According to this DBIish GitHub issue (127), the environment variable DBIISH_MYSQL_LIB was removed. I don't know if anyone brought it back.
However, if you add the library's path and the file is named mysql.dll, it will work. Not a good result for the scientific method.
So more testing is needed - and perhaps
C:\Program Files\MySQL\MySQL Server 8.0\lib>mklink mysql.dll .\libmysql.dll
Obviously you can create your own lib directory, add that to your path, and then add this symlink to that directory.
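A rough sketch of that approach (paths are only examples; mklink needs an elevated prompt):
mkdir C:\raku-libs
mklink C:\raku-libs\mysql.dll "C:\Program Files\MySQL\MySQL Server 8.0\lib\libmysql.dll"
rem then add C:\raku-libs to your PATH (System Properties > Environment Variables)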
Hope this helps. I've spent hours..
EDIT: Still spending time - accounting later.
Something very transitory is going on. I reset the machine (perhaps always do this from now on), and still got the missing mysql.dll errors. Tried going into the MySQL lib directory to execute raku from there.. worked. changed directories.. didn't work.
Launched administrator cmd - from home directory, tried the raku command. Worked. Ok - not good, but perhaps consistent. Launched non admin cmd, tried it from the MySQL lib directory, worked. And just for giggles, tried it outside of that directory.. worked.
Now I can't get it not to work. Will explore NativeLibs::Searcher as Valle Lukas suggested!
Maybe the example in the dbiish repository is not valid anymore.
The DBIISH_MYSQL_LIB environment variable seems to have been replaced by NativeLibs::Searcher with commit 9bc4191.
Looking at NativeLibs::Searcher may help to find the root cause of the problem.
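In the meantime, a quick way to see the underlying failure instead of the question's generic die message is to let the exception surface, for example (same placeholder credentials as in the question):
use DBIish;
my $dbh = try DBIish.connect('mysql', :host<localhost>, :port(3306),
    :database<dbNameHere>, :user<usernameHere>, :password<pwdIsHere>);
unless $dbh { note "connect failed: {$!.message}"; exit 1 }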

MySQL login-path issues with clustercheck script used in xinetd

# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
It is kind of strange: the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of using the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status was showing signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the Linux which command.
It has been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and the correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud#gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
Then edit the script /usr/bin/mysqlclustercheck, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of the mysql client command you found in the previous step.
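For example, using the path found above, the edited line would start like this (it continues with a backslash exactly as in the original script):
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \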
I also see that you are not using MySQL connection credentials for connecting to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password pair in order to connect to the MySQL server.
So normally, you should execute the script on the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Where haproxy/haproxyMySQLpass is the MySQL connection user/pass for the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings, like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting happens because you are trying to write output from a script run by xinetd. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (13 in POSIX).
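If you do need debug output from the script, writing it to a file instead of stdout avoids the broken pipe, e.g.:
echo "debug message" >> /tmp/mysqlclustercheck.debug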
Finally, I had issues with this script on SLES 12.3, and I eventually managed to run it not as 'nobody' but as 'root'.
Hope it helps

Request Entity Too Large

I get this message,
Request Entity Too Large
The requested resource
/index.php
does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
I set
php_value post_max_size 50M
php_value upload_max_filesize 50M
in .htaccess, but it didn't help.
How to overcome this?
Thanks
Once you have finished raising PHP's memory_limit, post_max_size and upload_max_filesize, I would like to recommend some articles related to the topic; maybe one of them solves the problem.
I found this post on Server Fault:
https://serverfault.com/questions/79741/php-apache-post-limit/79745#79745
sybreon suggests to double-check the Content-Length, and - citing - "ensure that you are directly connecting to Apache and not through either a proxy or a reverse-proxy. Some reverse-proxies place a cap on the maximum size of a request as a sort of security measure. So, you may want to check that as well as your Apache logs to ensure that nothing else is going on."
sybreon also posted this link: Apache 413 error problems.
The following is only applicable if you have the mod_ssl module turned on in Apache. (Otherwise this setting can cause a server crash.)
Citing the article:
"I was using Apache SSL client certificates, which have a limit of 128K, and if re-negotiation has to happen, a larger POST will fail.
This Bugzilla posting had the clues - You have to set the following as DEFAULTS for your SSL server, not just the directory.
SSLVerifyClient require
Otherwise it forces a renegotiation of some sort, and fails with a 413 error."
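A minimal sketch of what that server-wide default might look like (assuming a standard SSL virtual host; adapt to your own setup):
<VirtualHost *:443>
    SSLEngine on
    # make client verification the default for the whole vhost,
    # not just for a single <Directory> or <Location> block
    SSLVerifyClient require
    # ...
</VirtualHost>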
The previous article also mentioned the LimitRequestBody directive.
A guy says here that setting this directive appropriately solved his problem.
I hope one of these settings solves this problem!
The only thing that worked for me was to turn up the SSL renegotiation buffer size. You can set this by...
<Directory /my/blah/blah>
...
# Set this to something big...
SSLRenegBufferSize 10486000
...
</Directory>
...and then just restart Apache for the change to take effect. (Found this at: http://forum.joomla.org/viewtopic.php?p=2085574)
You can also use "Location /" to simply apply the setting to a whole VirtualHost:
<VirtualHost *:443>
# ...
<Location />
SSLRenegBufferSize 101048600
</Location>
# ...
</VirtualHost>
My server is Apache. It was the mod_security module that was preventing POSTs of large data, approximately 171 KB.
I made the configuration changes below in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
If post_max_size and upload_max_filesize have been set in PHP,
and LimitRequestBody is set high enough in apache2.conf or the ModSecurity config files,
then possibly a .htaccess file will work.
1. Go to the directory containing the upload PHP file (the file or page throwing the error).
2. Make or edit .htaccess.
3. Edit or create a line with LimitRequestBody 20971520 in it.
4. Save the .htaccess and set permissions (644, Apache owner).
5. Possibly restart Apache.
Tada. Hopefully fixed.
This sets the limit for this folder only, which is one way to avoid a global setting in PHP and Apache that leaves you open to large-packet / load DoS attacks.
LimitRequestBody 0 gives you unlimited uploads.
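Putting it together, a minimal .htaccess for the upload directory might look like this (sizes are illustrative, and the php_value lines only apply when PHP runs as an Apache module):
php_value post_max_size 50M
php_value upload_max_filesize 50M
LimitRequestBody 52428800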
I was struggling with this 413 - Request Entity Too Large problem for the last day or so, as I was trying to upload fairly large (multi-MB) images to the server.
My setup is Apache (227) proxying requests to a JBoss EAP (6.4.20) server for accessing REST endpoints.
Two things worked for me.
Making SSLVerifyClient required at the virtual host level. This means all resources need a valid client cert to be presented in order to be served. This was not an option for me, as everything except /api should NOT be protected by mutual auth. So, while it worked, it was not an option for me.
I removed the global-level SSLVerifyClient required and kept it 'optional', then re-enabled the required option only on <Location /api>...</Location>. The trick was to have the SSL renegotiation happen only after a certain threshold is reached - which would be our desired upload file size.
So, finally, it turned out that I had to set 'SSLRenegBufferSize' on a specific LocationMatch as follows:
<LocationMatch ^/api/v1/path/(.*)/to/(.*)/resource/endpoint$>
# allow up to 5 MB for files to come through
SSLRenegBufferSize 5242880
</LocationMatch>
(.*) in the case above represents my path params in the endpoint. Hope this helps.
After raising PHP's memory_limit, post_max_size and upload_max_filesize in php.ini, I still had the problem.
What was also needed was the following in apache2.conf:
LimitRequestBody 1000000000
That's for a max size of 1GB.
The docs say that 0 is the default, which means unlimited. However, until I set the directive, I couldn't upload large files.
Don't forget to restart apache2.

How to enable gzip HTTP compression on Windows Azure dynamic content

I've been trying unsuccessfully to enable gzip HTTP compression on my Windows Azure hosted WCF Restful service which returns JSON only from GET and POST requests.
I have tried so many things that I would have a hard time listing all of them, and I now realise I have been working with conflicting information (regarding old versions of Azure, etc.), so I think it best to start with a clean slate!
I am working with Visual Studio 2008, using the February 2010 tools for Visual Studio.
So, according to the following link..
.. HTTP compression has now been enabled. I've used the advice at the following page (the URL compression advice only)..
http://blog.smarx.com/posts/iis-compression-in-windows-azure
<urlCompression doStaticCompression="true"
doDynamicCompression="true"
dynamicCompressionBeforeCache="true"
/>
.. but I get no compression. It doesn't help that I don't know what the difference is between urlCompression and httpCompression. I've tried to find out but to no avail!
Could the fact that the tools for Visual Studio were released before the version of Azure which supports compression be a problem? I have read somewhere that, with the latest tools, you can choose which version of the Azure OS you want to use when you publish ... but I don't know if that's true, and if it is, I can't find where to choose. Could I be using a version from before HTTP compression was enabled?
I've also tried blowery http compression module, but no results.
Does any one have any up-to-date advice on how to achieve this? i.e. advice that relates to the current version of the Azure OS.
Cheers!
Steven
Update: I edited the above code to fix a typo in the web.config snippet.
Update 2: Testing the responses using the whatsmyip URL shown in the answer below is showing that my JSON responses from my service.svc are being returned without any compression, but static HTML pages ARE being returned with gzip compression. Any advice on how to get the JSON responses to compress will be gratefully received!
Update 3: Tried a JSON response larger than 256KB to see if the problem was due to the JSON response being smaller than this as mentioned in comments below. Unfortunately the response is still un-compressed.
Well it took a very long time ... but I have finally solved this, and I want to post the answer for anyone else who is struggling. The solution is very simple and I've verified that it does definitely work!!
Edit your ServiceDefinition.csdef file to contain this in the WebRole tag:
<Startup>
<Task commandLine="EnableCompression.cmd" executionContext="elevated" taskType="simple"></Task>
</Startup>
In your web-role, create a text file and save it as "EnableCompression.cmd"
EnableCompression.cmd should contain this:
%windir%\system32\inetsrv\appcmd set config /section:urlCompression /doDynamicCompression:True /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
.. and that's it! Done! This enables dynamic compression for the JSON returned by the web role, which (I think I read somewhere) has a rather odd MIME type, so make sure you copy the code exactly.
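To verify, a request that advertises gzip support should now come back with a Content-Encoding: gzip header; for example (hostname and path are placeholders):
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" https://yourapp.cloudapp.net/service.svc/endpoint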
Well at least I'm not alone on this one - and it's still a stupid PITA almost a year later.
The problem is a MIME type mismatch. WCF returns JSON responses with Content-Type: application/json; charset=UTF-8. The default IIS configuration, about halfway down that page, does not include that as a compressible MIME type.
Now, it might be tempting to add an <httpCompression> section to your web.config, and add application/json to that. But that's just a bad way to waste a good hour or two - you can only change the <httpCompression> element at the applicationHost.config level.
So there are two possible solutions. First, you could change your WCF response to use a MIME type that is compressible in the default configuration. text/json will work so adding this to your service method(s) will give you dynamic compression: WebOperationContext.Current.OutgoingResponse.ContentType = "text/json";
Alternatively, you could change the applicationHost.config file using appcmd and a startup task. This is discussed (among other things) on this thread. Note that if you add that startup task and run it in the dev fabric, it will work once. The second time it will fail because you already added the configuration element. I ended up creating a second cloud project with a separate csdef file, so that my devfabric would not run that startup script. There are probably other solutions though.
Update
My suggestion for separate projects in the previous paragraph is not really a good idea. Non-idempotent startup tasks are a very bad idea, because some day the Azure fabric will decide to restart your roles for you, the startup task will fail, and it'll go into a recycle loop. Most likely in the middle of the night. Instead, make your startup tasks idempotent as discussed on this SO thread.
To deal with local development fabric having issues after first deploy, I added the appropriate commands to the CMD file to reset config. In addition, I'm setting compression level here specifically, since it appears to default to zero in some (all?) cases.
REM Remove old settings - keeps local deploys working (since you get errors otherwise)
%windir%\system32\inetsrv\appcmd reset config -section:urlCompression
%windir%\system32\inetsrv\appcmd reset config -section:system.webServer/httpCompression
REM urlCompression - is this needed?
%windir%\system32\inetsrv\appcmd set config -section:urlCompression /doDynamicCompression:True /commit:apphost
REM Enable json mime type
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost
REM IIS Defaults
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='text/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='message/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/x-javascript',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='*/*',enabled='False']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='text/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='message/*',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='application/javascript',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='*/*',enabled='False']" /commit:apphost
REM Set dynamic compression level to appropriate level. Note gzip will already be present because of reset above, but compression level will be zero after reset.
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /+"[name='deflate',doStaticCompression='True',doDynamicCompression='True',dynamicCompressionLevel='7',dll='%%Windir%%\system32\inetsrv\gzip.dll']" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression -[name='gzip'].dynamicCompressionLevel:7 /commit:apphost
This article from MS is their how-to script for JSON: http://msdn.microsoft.com/en-us/library/windowsazure/hh974418.aspx.
It deals with many of the issues mentioned, e.g. being able to handle an Azure recycle, etc.
Just had an issue with this regarding error code 183, and I found a solution. So if anybody else is experiencing this, here goes:
Here's the error I got:
User program "F:\approot\bin\EnableCompression.cmd" exited with non-zero exit code 183. Working Directory is F:\approot\bin.
And here's the code that fixed it for me:
REM *** Add a compression section to the Web.config file. ***
%windir%\system32\inetsrv\appcmd set config /section:urlCompression /doDynamicCompression:True /commit:apphost >> "%TEMP%\StartupLog.txt" 2>&1
REM ERRORLEVEL 183 occurs when trying to add a section that already exists. This error is expected if this
REM batch file were executed twice. This can occur and must be accounted for in a Windows Azure startup
REM task. To handle this situation, set the ERRORLEVEL to zero by using the Verify command. The Verify
REM command will safely set the ERRORLEVEL to zero.
IF %ERRORLEVEL% EQU 183 VERIFY > NUL
REM If the ERRORLEVEL is not zero at this point, some other error occurred.
IF %ERRORLEVEL% NEQ 0 (
ECHO Error adding a compression section to the Web.config file. >> "%TEMP%\StartupLog.txt" 2>&1
GOTO ErrorExit
)
REM *** Add compression for json. ***
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost >> "%TEMP%\StartupLog.txt" 2>&1
IF %ERRORLEVEL% EQU 183 VERIFY > NUL
IF %ERRORLEVEL% NEQ 0 (
ECHO Error adding the JSON compression type to the Web.config file. >> "%TEMP%\StartupLog.txt" 2>&1
GOTO ErrorExit
)
REM *** Exit batch file. ***
EXIT /b 0
REM *** Log error and exit ***
:ErrorExit
REM Report the date, time, and ERRORLEVEL of the error.
DATE /T >> "%TEMP%\StartupLog.txt" 2>&1
TIME /T >> "%TEMP%\StartupLog.txt" 2>&1
ECHO An error occurred during startup. ERRORLEVEL = %ERRORLEVEL% >> "%TEMP%\StartupLog.txt" 2>&1
EXIT %ERRORLEVEL%
Solution found at http://msdn.microsoft.com/en-us/library/azure/hh974418.aspx
Yes, you can choose the OS you want, but by default, you'll get the latest.
Compression is tricky. There are lots of things that can go wrong. Are you by chance doing this testing behind a proxy server? I believe IIS by default doesn't send compressed content to proxies. I found a handy tool to test whether compression is working when I was playing with this: http://www.whatsmyip.org/http_compression/.
It looks like you have doDynamicCompression="false"... is that just a typo? You want that to be on if you're going to get compression on JSON you return from a web service.