I want to update an extension I have developed, which is installed by policy because it is not published in the Chrome Web Store.
This is the content of my updates.xml file, which is reachable through the update URL in my extension's manifest.json.
<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
<app appid='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'>
<updatecheck codebase='https://myserver.com/xxxxxxxxxxxxxxx/download' version='0.8' />
</app>
</gupdate>
On previous occasions, modifying the updates.xml file was enough to get the extension updated without problems, but now it no longer seems to be enough.
I have read that Google Chrome no longer supports updates from outside the Chrome Web Store by default. What do I have to do to update my extension?
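For context, a policy-installed (force-installed) extension is paired with the URL of this updates.xml in the ExtensionInstallForcelist policy. On Windows, the registry values set by a GPO would look roughly like this sketch (the updates.xml URL is a placeholder; only the codebase URL appears above):
; Hypothetical GPO registry entry for a force-installed, self-hosted extension
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist]
"1"="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx;https://myserver.com/xxxxxxxxxxxxxxx/updates.xml"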
Related
We have an external extension hosting service set up to serve the XML and a .crx file packed with Chrome.
The XML is formatted with:
<gupdate xmlns="http://www.google.com/update2/response" protocol="1.0">
<app appid="xxxxxxxxxxx">
<updatecheck codebase="https://extensions.site.com/crx/build.crx" version="1.9.6"/>
</app>
</gupdate>
We edit group policy to point at the XML file (we've also tried pointing at the .crx file, with and without an update URL) using the Google Admin Console to target Chrome OS devices.
We configure it using the built-in setting to deploy to Chrome OS.
We also have the appropriate settings in place for allowing external extensions.
The extension refuses to install on Chrome OS enterprise (education).
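(For reference, the force-install pairing Chrome expects is the extension ID plus the URL of the update XML, not the .crx itself, i.e. something like the line below; the update XML path here is hypothetical.)
xxxxxxxxxxx;https://extensions.site.com/crx/update.xml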
This worked in VS2010 and VS2012. But in VS2013 the application (started by pressing "Run" or F5) just starts with my user's rights and cannot access some resources (I'm using HttpListener).
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
<security>
<requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
</requestedPrivileges>
</security>
</trustInfo>
I tried to Google it, tried to generate a new manifest, and copied its content from MSDN, but nothing helped. Did something change in this part of VS2013?
Update1:
That was only a part of it. Here is the complete manifest content:
<?xml version="1.0" encoding="utf-8"?>
<asmv1:assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1" xmlns:asmv1="urn:schemas-microsoft-com:asm.v1" xmlns:asmv2="urn:schemas-microsoft-com:asm.v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<assemblyIdentity version="1.0.0.0" name="MyApplication.app"/>
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
<security>
<requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
</requestedPrivileges>
</security>
</trustInfo>
<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
<application></application>
</compatibility>
</asmv1:assembly>
Update2:
Okay, here is a simple example: when I run the compiled .exe file, UAC asks for admin privileges. But when I run it from VS2013 (by pressing "Run" or F5), it doesn't! And if you open the same project with VS2012/VS2010, they do ask to restart under admin.
You can check this quickly:
Create a console application in VS2013, add a manifest, and set level="requireAdministrator". Then run it by pressing F5 (VS2013 only runs the application under admin when you press Ctrl+F5).
But this is not the behavior of VS2012/VS2010!
How can we get the old behavior?
Update3:
Please vote here or inform me about another ticket.
You need to disable the hosting process option to get the VS restart prompt: Project + Properties, Debug tab, untick the "Enable the Visual Studio hosting process" checkbox. It can be easier to just start VS elevated right away: right-click the shortcut and choose Run as Administrator.
Not entirely sure if this is a bug or a feature. Keep your eye on this Connect report to learn more.
Update: looks like a bug; the feedback report was closed as "fixed". Unfortunately it gives no hint as to when that fix is going to make it to our machines. Maybe a future VS2013 update, surely the next version.
Update2: the fix made it into VS2013 Update 3.
What I ended up doing is running the project without debugging (Ctrl+F5). Then it gives me the same prompt that Visual Studio 2010 gives you.
I'm hoping this will get fixed soon™
In the meantime you can use handy shortcuts for restarting VS in admin mode; look up "Visual Studio Restart" in the extension gallery.
Edit:
The only way I see to achieve the old behavior is to turn off the VS hosting process, as it is this process that for some reason "eats" the elevation prompt. Actually, when I think about it, this behavior might even be by design. You can turn off the hosting process in the project properties (Debug tab), or set the UseVSHostingProcess tag to false in the .csproj's platform configuration, like so:
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<PlatformTarget>AnyCPU</PlatformTarget>
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<UseVSHostingProcess>false</UseVSHostingProcess>
</PropertyGroup>
UPDATE: Solved one part, but not the other
I have the CRX updating now (it was not rebuilding).
However, Chrome will not accept the XML or CRX at an https URL.
I believe the https failure is because it's a self-signed certificate. Does anyone know if there's a way around this? (This is purely for development, so it's hosted internally.)
ORIGINAL POST:
I created a packaged extension that is hosted on my internal website, but is added to Chrome via dragging it from the desktop (because Chrome won't allow installing packaged extensions via external websites - see here: After adding ExtensionInstallSources preference with my URL to Chrome Preferences, still won't allow installing ".crx" packaged app ).
The manifest has the update_url set to an XML file located on my site. That XML file has the URL for the .crx file set under updatecheck codebase='...'. Both files exist on the website and are findable. I also bumped the version number to 2.0.0.2 in both the XML file and the manifest.json. I also made a change in the index.html file of the extension.
I checked the appid and it is the same in the XML file and in Chrome.
Despite clicking the "update extensions now" button about 50 times, and waiting 10 minutes, it does not update.
NOTE: I did alias the internal IP 192.168.1.108, where the site is hosted, as myinternal.fake in my hosts file, but this works in both Chrome and Firefox, so I don't think that's the issue.
Update XML File (located at: https://myinternal.fake/updates/helloworld.xml)
<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
<app appid='akchdaojnpiglpjeiamjpacbkppcgbgj'>
<updatecheck codebase='https://myinternal.fake/helloworld.crx' version='2.0.0.2' prodversionmin='23' />
</app>
</gupdate>
manifest.json
{
"manifest_version": 2,
"name": "Hello World",
"version": "2.0.0.2",
"minimum_chrome_version": "23",
"update_url": "https://myinternal.fake/updates/helloworld.xml",
"icons":
{
"16": "icon_16.png",
"128": "icon_128.png"
},
"app":
{
"background":
{
"scripts":
[
"main.js"
]
}
}
}
EDIT: I also checked that the headers are acceptable to Chrome (according to this: http://developer.chrome.com/dev/extensions/hosting.html). The server sends the CRX file as "text/plain" and does NOT send the X-Content-Type-Options: nosniff header, so it should be valid.
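If you want to verify those headers yourself, a quick check with curl works (assuming curl is installed; -k skips the self-signed-certificate check, -I fetches only the headers):
# Show only the response headers Chrome will see for the .crx
curl -kI https://myinternal.fake/helloworld.crx
# Inspect Content-Type and confirm X-Content-Type-Options: nosniff is absent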
Also, when I changed from https to http, clicking "update extensions now" makes the extension disappear for a split second, which indicates it's now reading the XML, but it still won't accept the update!
The issue is with self-signed certificates and Chrome. Chrome does not accept extension updates from self-signed certificates unless the certificate comes from an "accepted" authority. These steps will make it work:
Follow these steps: https://stackoverflow.com/a/15076602/857025 to export your certificate and then import it as an authority
Close Chrome
Restart Chrome
Close extensions window if opened
Reopen via "chrome://extensions" and then click "update extensions now"
It should then update your extension located on a self-signed https connection.
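If you prefer to grab the certificate from the command line before importing it, an openssl sketch like this works (assuming openssl is installed, using the hostname from the question):
# Fetch the server's certificate chain and save the leaf cert as PEM
# so it can be imported into Chrome as a trusted authority
openssl s_client -connect myinternal.fake:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > myinternal.pem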
UPDATE: This is not a perfect solution, as Chrome appears to be a bit wonky about accepting self-signed certs. It randomly stops seeing updates. If I switch back to using regular http (for the update_url and the CRX's URL), updates happen every time.
I checked and my cert is still a trusted authority but Chrome suddenly stopped recognizing updates, so there must be an issue with this.
By the way, Google has stopped supporting updates to extensions that are hosted outside the Chrome Web Store: http://blog.chromium.org/2013/11/protecting-windows-users-from-malicious.html
I am trying to host a Chrome extension on my own server. I'm having a really weird issue where, every so often, installing the extension by pointing my browser at the .crx installs a version with a different appid and a codebase that dates back a couple of weeks.
I suspect that I somehow have 2 extension ids in play. One which represents the current codebase and another which entered the mix some time ago.
Is there a way that I can prevent this confusion from occurring?
Longer Description
At the very beginning of my extension development process, the version number in my manifest.json was set to "1.0" for some time.
Once development started stabilizing, I reset the version number to "0.0.1" and bumped it from that point whenever I pushed changes.
Whenever I bump the version number, I package the extension and scp it to my server. The important parts of that process are below:
Packing the extension:
'/Applications/Google Chrome.app/Contents/MacOS/Google Chrome' --pack-extension=<PATH TO UNPACKED EXTENSION> --pack-extension-key=<GENERATED KEY>
The <GENERATED KEY> is the .pem private key that was generated by Chrome the first time I packed the extension (I think; it is hard to remember, since I first packed it some time ago).
Copying the .crx to the server:
scp -P <PORT> extension.crx <PATH TO SERVER>
Copying the update.xml to the server:
scp -P <PORT> update.xml <PATH TO SERVER>
The update.xml:
This is a standard update.xml file. The version number and .crx location are as expected. The only potentially interesting thing is the appid. I got this appid from the Chrome Extensions management page at one point.
<?xml version="1.0" encoding="UTF-8"?>
<gupdate xmlns="http://www.google.com/update2/response" protocol="2.0">
<app appid="cdlhmlllfilohhmmpakbcdfaabannega">
<updatecheck codebase="<EXTENSION CRX LOCATION>" version="0.0.21"/>
</app>
</gupdate>
At this point, I can ssh into my server, unpack the extension there, check the version number and read the codebase and everything will be up-to-date and as expected.
Then, I will point my browser at the <EXTENSION CRX LOCATION> and install the extension. The version number will be wrong, the appid will not match that in the update.xml and the codebase will be from weeks ago.
The extension IDs in the XML and manifest.json have to be equal.
For future readers: The extension can only be packed with the same extension ID when the same .pem is used.
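If you suspect two keys are in play, you can check which ID a given .pem will produce without installing anything: the ID is the first 32 hex characters of the SHA-256 of the DER-encoded public key, transliterated to the letters a-p. A sketch, assuming openssl and coreutils are available and that key.pem is the file passed to --pack-extension-key:
# Print the extension ID this private key yields
openssl rsa -in key.pem -pubout -outform DER 2>/dev/null \
  | sha256sum | head -c 32 | tr '0-9a-f' 'a-p'
If the output does not match the appid in your update.xml, the extension was packed with a different key at some point.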
For problems regarding the extensionID in the Chrome Web store, see:
• Packaging > Uploading a previously packaged extension to the Chrome Web Store
I have a directory to which a process uploads some .pdf files. This process is out of my control.
I need to make those files available through the website using Tomcat.
I have a directory /var/lib/tomcat5/webapps/test1 available to the web and I can see the files in it with a browser.
So, I created a symbolic link pointing at the directory with the .pdf files:
/var/lib/tomcat5/webapps/test1/files/, but I can't see anything in that directory.
How can I enable symlinks in the test1 directory only? I don't want to enable symlinks everywhere, just so that directory with .pdf files is available to the web.
There are a few problems with the solution of creating a META-INF/context.xml that contains <Context path="/myapp" allowLinking="true">.
The biggest issue is that if a conf/context.xml exists, the allowLinking in the <Context> there takes precedence over a <Context> in a META-INF/context.xml. And if the <Context> in conf/context.xml does not explicitly define allowLinking, that's the same as saying allowLinking="false" (see my answer to a context precedence question).
To be sure that your app allows linking, you have to say <Context override="true" allowLinking="true" ...>.
Another issue is that the path="/myapp" is ignored in a META-INF/context.xml. To prevent confusion, it's best to leave it out. The only time path in a <Context> has any effect is in the server.xml, and the official Tomcat docs recommend against putting <Context>s in a server.xml.
Finally, instead of a myapp/META-INF/context.xml file, I recommend using a conf/Catalina/localhost/myapp.xml file, as in the sketch below. This technique keeps you out of META-INF, which is the guts of your webapp -- I don't like to risk mucking about in the guts of my webapp. :-)
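For example, a minimal conf/Catalina/localhost/myapp.xml for Tomcat 7 or earlier (where allowLinking still lives on <Context>) might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!-- conf/Catalina/localhost/myapp.xml: the file name (minus .xml) determines
     the context path, so no path attribute is needed -->
<Context override="true" allowLinking="true" />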
Create a context.xml file in a META-INF directory in your web app containing:
<?xml version="1.0" encoding="UTF-8"?>
<Context path="/myapp" allowLinking="true">
</Context>
more here: http://www.isocra.com/2008/01/following-symbolic-links-in-tomcat/
This works differently in Tomcat 8+:
http://tomcat.apache.org/migration-8.html
<Resources allowLinking="true" />
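A per-app META-INF/context.xml on Tomcat 8+ would therefore look something like this sketch:
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Resources allowLinking="true" />
</Context>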
Yes, I know it's an old question, but I found a new solution: using mount with the --bind option instead of a symlink, Tomcat doesn't need any reconfiguring:
cd /var/lib/tomcat5/webapps/test1/
mkdir files
mount --bind /path/to/actual/upload/directory/files files
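Note that a plain mount --bind does not survive a reboot; to make it persistent, an /etc/fstab entry along these lines (same paths as above) does the same thing:
# /etc/fstab: bind the upload directory into the webapp at boot
/path/to/actual/upload/directory/files  /var/lib/tomcat5/webapps/test1/files  none  bind  0 0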
There are 4 places where Context can live.
tomcatdir/conf/server.xml
tomcatdir/conf/context.xml
tomcatdir/conf/Catalina/localhost/appname.xml
tomcatdir/webapps/appname/META-INF/context.xml
In the case of Tomcat 8, the allowLinking attribute should be specified not on the <Context> but on the <Resources> tag. My tomcatdir/conf/context.xml looks like this:
<Context>
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>
<Resources allowLinking="true" cachingAllowed="true" cacheMaxSize="100000" />
</Context>
This solution works fine for me now, but I also want to share the mistake I made before arriving at it.
I had defined <Resources> both in tomcatdir/conf/server.xml and in tomcatdir/conf/context.xml, and allowLinking="true" was set only in tomcatdir/conf/server.xml.
What I found is that not specifying allowLinking at all is the same as setting it to false. So I removed the <Resources> tag from server.xml and left it only in tomcatdir/conf/context.xml, with the allowLinking="true" attribute on it.
I did it another way.
I edited a different configuration file: apache-tomcat-7.0.33/conf/server.xml.
Inside the <Host> tag I added:
<Context path="/data" docBase="C:\datos" debug="0" reloadable="true" crossContext="false"/>
So you can access it via: http://localhost/data
Adding the following line to conf/context.xml enables symlinks for me on Apache Tomcat 8.5+:
<Resources allowLinking="true" cachingAllowed="true" cacheMaxSize="100000" />