I currently would like to send newsletters to all the people at a specific company.
For privacy and security reasons I'm required to host all the newsletter content on their own server, including the images; I can't put that content on a public web server.
Because of that, all the image URLs are network URLs, e.g.
'file://nameOfTheServer.something.cool/newsletters/img.jpg'
However, I'm not on the same network; I send the newsletters from my office.
Because of that, my impression is that when the emails are composed, Outlook 2010 can't resolve these network URLs and modifies them:
'file://nameoftheserver' becomes 'file:///\nameoftheserver', so the image is no longer displayed once the recipients receive it.
If I send the same e-mail from another e-mail address while on the same network, it works: the URLs aren't modified and stay the same.
Any idea how I could solve this?
Regards,
You have to change the format of the URL. It is important to properly reference resources that should be loaded from a web server, even an intranet one: you're using file://, but in this case you really must use http://. Be aware that external links to resources (anything not attached to the email message itself) can be blocked by email clients, antivirus software, or even the email provider; in your case this applies to the images. Because of this, people who read the email will probably get a warning about external content being loaded, and images won't display until the reader confirms loading them from the external site (even an intranet is considered external).
You may be wondering why some emails you've received show images when opened with no warning at all. That is because those images are not referenced from external sites: they are included inside the email body as attachments, so they are local references, which are considered "safe".
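The inline-attachment approach described above can be sketched with Python's standard email package; the addresses and image bytes below are placeholders for your real values:

```python
# Sketch: embed the image inside the email body via a Content-ID reference,
# instead of linking to a file:// or http:// URL.
from email.message import EmailMessage
from email.utils import make_msgid

msg = EmailMessage()
msg["Subject"] = "Newsletter"
msg["From"] = "sender@example.com"
msg["To"] = "reader@example.com"

# Generate a Content-ID and reference it from the HTML body via cid:
# (make_msgid() returns the id wrapped in <>, which we strip for the src).
image_cid = make_msgid()
msg.set_content("Plain-text fallback.")
msg.add_alternative(
    f'<html><body><img src="cid:{image_cid[1:-1]}"></body></html>',
    subtype="html",
)

# Attach the image bytes as a related part of the HTML alternative.
image_bytes = b"\x89PNG\r\n\x1a\n"  # placeholder; read your real file here
msg.get_payload()[1].add_related(
    image_bytes, maintype="image", subtype="png", cid=image_cid
)
```

Sending the resulting message through SMTP is unchanged; the image travels with the email, so no external fetch (and no warning) is needed.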
Related
I have a small Django website, hosted on PythonAnywhere, that allows file uploading. The files are uploaded to AWS S3.
Now the problem is that even with the download attribute present in the HTML, the browser still renders the file instead of downloading it:
<a download href="{{file_url}}">Download</a>
From the Mozilla Developer Network anchor element reference:
"....download only works for same-origin URLs, or the blob: and data: schemes..."
Your AWS S3 links are most probably not same-origin (not the same domain, etc.) as your site.
If that is what you're running into, one workaround that comes to mind is to add a transfer URL on your site that receives a document identifier, downloads the file from AWS S3, and forwards its content as the response. This way you can also control headers like Content-Type, which you may need to set explicitly to make the browser behave the way you want.
One addition: if you go with a solution like that, take precautions to restrict what that transfer URL will fetch, and only relay content your site intends to serve. Otherwise you will have opened a vulnerability similar to what is called an "Open Redirect" vulnerability.
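The transfer-URL idea can be sketched framework-agnostically. The helper below only builds the response headers; the view/route wiring and the actual S3 fetch (e.g. via boto3) are left as assumptions:

```python
# Sketch: headers for a "transfer URL" response that forces a download.
# The attachment disposition is exactly what the cross-origin `download`
# attribute cannot enforce on its own.
import mimetypes

def download_headers(filename: str) -> dict:
    """Headers that make the browser save the file instead of rendering it."""
    content_type, _ = mimetypes.guess_type(filename)
    return {
        "Content-Type": content_type or "application/octet-stream",
        "Content-Disposition": f'attachment; filename="{filename}"',
    }

# In a Django view this would look roughly like (hypothetical names):
#   data = fetch_from_s3(document_id)       # e.g. via boto3
#   response = HttpResponse(data)
#   for key, value in download_headers(name).items():
#       response[key] = value
```

Remember to validate the document identifier against the set of files your site actually intends to serve, per the Open Redirect caveat above.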
I have a system where, whenever a user uploads an image, it sends an email to the registered user's Gmail. But in the email the thumbnail is not viewable.
I inspected the element and found the src linked to this URL:
https://ci5.googleusercontent.com/proxy/VI2cPXWhfKZEIarh-iyKNz1j9q7Ymh8ty4Yz19lXh82RjSlACBzS0aRajfIj913uXAsX2ylcLEDs5FBsj4cR9TcU75Pw5djdHx4htxdCAQxs_ue1Q1wi5TV43uLLBpigpjH1xN747mUHSRdTBJmXQWFyykInJCRXicM1KhNk=s0-d-e1-ft#https://www.somedomain.com/files/1658/thumbnail_71JtDozxS1L._SY450_.jpg
Obviously it is being cached by the Google proxy.
But I can view the image without the googleusercontent proxy by accessing https://www.somedomain.com/files/1658/thumbnail_71JtDozxS1L._SY450_.jpg directly (I masked the domain, so the image might not be available to you).
I tried clearing the browser cache but the problem persists. How can I bypass the googleusercontent proxy, or at least make the thumbnail display?
I checked this link, Images not displayed for Gmail, but I'm not using localhost and the image itself is accessible from outside my local network.
How does Google Image Proxy work
The Google Image Proxy is a caching proxy server. Every time an image link is included in an email, the request goes to the Google Image Proxy first to see if the image has been cached; if so, it is served from the proxy, otherwise the proxy fetches the image and caches it thereafter.
The solution for most issues
The Google Image Proxy server will fetch your images if the images:
have an extension like .png, .jpg/.jpeg or .gif only (possibly .webp as well, but not .svg);
do not use any kind of query string in the image URL, like ?id=123;
have a URL that maps directly onto the image;
do not have an overly long name.
Requirements for image server:
The response from the image server/proxy server must include a correct header such as Content-Type: image/jpeg.
The file extension and the Content-Type header must agree.
The status code in the server response must be 200, not 403, 500, etc.
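The checklist above can be encoded as a quick self-test against an image URL plus the status and headers your server returns for it. This is only an approximation of the proxy's behavior; the name-length cutoff in particular is an assumption:

```python
# Sketch: does an image URL + server response satisfy the rules listed above?
from urllib.parse import urlsplit

# Extension -> Content-Type pairs the proxy is known to accept (.svg is not).
EXPECTED = {
    ".png": "image/png", ".jpg": "image/jpeg", ".jpeg": "image/jpeg",
    ".gif": "image/gif", ".webp": "image/webp",
}

def proxy_friendly(url: str, status: int, content_type: str) -> bool:
    """Rough check of the image-proxy requirements described in the answer."""
    parts = urlsplit(url)
    dot = parts.path.rfind(".")
    ext = parts.path[dot:].lower() if dot != -1 else ""
    return (
        ext in EXPECTED                    # supported extension
        and not parts.query                # no query string like ?id=123
        and len(parts.path) < 100          # assumed cutoff for "long name"
        and status == 200                  # not 403, 500, etc.
        and content_type == EXPECTED[ext]  # header agrees with extension
    )
```

For example, `proxy_friendly("https://example.com/a/img.jpg", 200, "image/jpeg")` passes, while the same URL with `?id=1` appended, a 403 status, or an .svg extension fails.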
What could help too?
Google support answer:
Set up an image URL proxy whitelist
When your users open email messages, Gmail uses Google’s secure proxy
servers to serve images that might be included in these messages. This
protects your users and domain against image-based security
vulnerabilities.
Because of the image proxy, links to images that are dependent on
internal IPs and sometimes cookies are broken. The Image URL proxy
whitelist setting lets you avoid broken links to images by creating
and maintaining a whitelist of internal URLs that'll bypass proxy
protection.
When you configure the Image URL proxy whitelist, you can specify a
set of domains and a path prefix that can be used to specify large
groups of URLs. See the guidelines below for examples.
Configure the Image URL proxy whitelist setting:
Sign in to your Google Admin console. Sign in using your administrator account (does not end in @gmail.com).
From the Admin console Home page, go to Apps > G Suite > Gmail > Advanced settings. Tip: To see Advanced settings,
scroll to the bottom of the Gmail page.
On the left, select your top-level organization.
Scroll to the Image URL proxy whitelist section.
Enter image URL proxy whitelist patterns. Matching URLs will bypass image proxy protection. See the guidelines below for more details and
instructions.
At the bottom, click Save.
It can take up to an hour for changes to propagate to user accounts.
You can track prior changes under Admin console audit log.
Guidelines for applying the Image URL proxy whitelist setting
Security considerations
Consult with your security team before configuring the Image URL proxy
whitelist setting. The decision to bypass image proxy whitelist
protection can expose your users and domain to security risks if not
used with care.
In general, if you have a domain that needs authentication via cookie,
and if that domain is controlled by an administrator within your
organization and is completely trusted, then whitelisting that URL
should not expose your domain to image-based attacks.
Important: Disabling the image proxy is not recommended. This option is available to provide flexibility for administrators, but
disabling the image proxy can leave your users vulnerable to malicious
attacks.
Entering Image URL patterns
To maintain a whitelist of internal URLs that'll bypass proxy
protection, enter the image URL patterns in the Image URL proxy
whitelist setting. Matching URLs will bypass the image proxy.
A pattern can contain the scheme, the domain, and a path. The pattern
must always have a forward slash (/) present between the domain and
path. If the URL pattern specifies a scheme, then the scheme and the
domain must fully match. Otherwise, the domain can partially match the
URL suffix. For example, the pattern google.com matches
www.google.com, but not gle.com. The URL pattern can specify a
path that's matched against the path prefix.
Important: Enter your actual domain name as you enter the image URL pattern. Always include a trailing forward slash (/) after the
domain name.
Examples of Image URL patterns
The following patterns are examples only. These patterns:
http://rule_fixed_scheme_domain.com/
rule_flex_scheme_domain.com/
rule_fixed_subpath.com/cgi-bin/
... will match the following URLs:
http://rule_fixed_scheme_domain.com/
http://rule_fixed_scheme_domain.com/test.jpg?foo=bar#frag
http://rule_fixed_scheme_domain.com
rule_flex_scheme_domain.com/
t.rule_flex_scheme_domain.com/test.jpg
http://t.rule_flex_scheme_domain.com/test.jpg
https://t.rule_flex_scheme_domain.com/test.jpg
http://rule_fixed_subpath.com/cgi-bin/
http://rule_fixed_subpath.com/cgi-bin/people
Note: The URL scheme (http://) is optional. If the scheme is omitted, the pattern can match any scheme, and allows partial matches
on the domain suffix.
Previewing the image URL patterns
Click Preview to see if the URLs match the image URL patterns
you've set. If the image URL matches a pattern, you'll see a
confirmation message. If the image URL does not match, an error
message appears.
Bharata has a great and detailed answer on this, but I wanted to add one thing I identified with a similar issue.
We had an x-webkit-csp content security header that turned out to be the culprit.
After removing it, everything worked through the image proxy.
Google's response was that x-webkit-csp is deprecated and to use the Content-Security-Policy header instead.
However, it seems like a bug that an unsupported header causes a fatal error rather than simply being ignored.
TL;DR: Make sure your server isn't blocking external connections (through AWS or .htaccess or some other firewall)!
I was having this problem too. I ran through every solution I could think of and every one I found online. Nothing fixed it.
Finally, I inspected the image in Gmail so that I could get the full CDN address Google creates for it. I tried to open that in a new tab and it failed. So I realized that Google wasn't able to grab the image.
In the end, I'd forgotten that I have the server locked down from all traffic except for my own (just a basic .htaccess IP deny). It's just a simple security layer I use while I'm in my development. Keep in mind that you might have it locked down within AWS or something more rugged like that.
I opened up all IPs for a minute, tested it, and sure enough it worked as expected. The old emails that were previously failing also worked, so it seems that Google tries to fetch the image any time the email is opened and they don't have it saved. Once I closed the IP range again, the image continued to work regardless; I'm guessing that once they write it to their CDN it remains there indefinitely.
So if you're certain that you've done everything else correctly, I would suggest making sure that the server is indeed open to the outside world so Google can process the image.
I had the same problem, and I solved it by specifying the "https://" protocol in the "src" URL of the img; otherwise "http" is prepended by default.
I'm making a webapp for members of my caving club to search through and view cave survey note PDFs. It works fine, and I got the AppCache working for the web version of it.
However, since the PDFs are quite large and slow to download, and many members have the PDFs on their local machines from the same SVN the website gets them from, it would be ideal for them to be able to use a page with links to a local SVN folder of their choosing.
The design goals:
1. The site displays links to PDF files on the local filesystem.
2. Whenever I add features to the site, users get them automatically the next time they open the page while connected to the internet.
3. After the first time they open the page, the site works offline.
Sadly web browsers don't appear to support this useful combination of design goals at once.
I can satisfy #1 by having users download a copy of the site, add their local SVN path in a JS file, and open their local copy in the browser, so that file:/// links work.
I can satisfy #2 by having absolute links to JS bundles on the server.
I can satisfy #3 by using the AppCache.
I thought I could get clever by having the copy of the page on the local file system use <html manifest="https://myserver.com/myapp.appcache">, but unfortunately Chrome doesn't seem to allow a local file to use an app cache manifest hosted on a server, for no good reason that I can see.
Does anyone know of another way I could satisfy all 3 goals?
Perhaps there's some simple program/config I could give my friends that would intercept web requests to https://myserver.com/some/folder and instead serve them out of a folder on their local file system?
Andy,
I know this post is a bit old, but I came across it looking for something else related to AppCache. My understanding is that the HTML page and the manifest must reside in the same domain for it to work. So I think you need to modify your design:
Create a JavaScript function that acts as a setting for the user to enter the path to their local copy of the PDFs. Store this information in localStorage.
Create a html template page for the document links.
Create a JavaScript function that populates the html template page with any documents and links the user enters.
This way, the users visit your application online and it uses AppCache to store itself and the JS files for offline use. To access the PDFs, the user clicks a settings button that launches a page to collect path information and saves it in localStorage. The users can then access the template page, which will populate with the documents they entered.
Here is a good intro to localStorage: http://www.smashingmagazine.com/2010/10/local-storage-and-how-to-use-it/
I am working on an online community where users will have a profile page, where they can upload an image of their choice or give the URL of a remote image.
Is it good to just store the remote image URL, and not the image itself, and use it on the profile page like this:
<img src="remote_image_url">
or download the image from the remote URL and store it on our own server for later use, like this:
<img src="path_on_our_server">
I am thinking from the hack-proofing point of view: are there any issues if I allow users to use a remote image as-is instead of downloading it onto our servers?
You should store the image; loading a remote URL over which you have no control is always dangerous.
To expand:
A user adds their avatar as www.example.com/pic.jpg. They then notice that you are simply including that URL on your site, so they change their avatar to www.example.com/hack.js, and you still include this file, so now any JS they add to that file will be run on your site.
An embedded JS inclusion like this is a hacker's dream and is definitely a huge security flaw. If you want to read about a real-life example of one of these attacks, eBay was caught out by one last year - http://www.infosecurity-magazine.com/news/ebay-under-fire-after-cross-site/
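If you do download and store the remote image, validate that the bytes are actually an image before serving them back; Content-Type headers alone can lie. A minimal magic-number sniff (the signatures below are the standard ones for these formats):

```python
# Sketch: reject stored "images" whose bytes are not actually image data.
def sniff_image_type(data: bytes):
    """Return 'jpeg', 'png', or 'gif', or None for anything else."""
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "gif"
    return None  # could be HTML, JavaScript, or anything else: reject it
```

A common hardening step on top of this is to re-encode the image with a library such as Pillow, which discards any non-image payload entirely.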
Think about what risks you are trying to mitigate.
Whether you let users upload images to your site or add links to remote image locations, bad people will do bad things. If you let people upload images to your server, there could be attack vectors against your server (vulnerability in image processing libraries triggered by deliberately malformed images). If you let people add links to remote images, the remote images could be malicious to target browser vulnerabilities (and your site then appears to be hosting malicious images).
If you care about people uploading profile images that are inappropriate then you will need active curation of some kind.
The Gravatar service specializes in hosting profile avatar images and has a Terms of Service squad to "police" avatar content.
http://gravatar.com
By using the user's avatar URL in your code, you're effectively making all your visitors request a resource from that user's site as well. The user will be able to track who looks at the image and when.
This is pretty much how analytics tools work: by requesting a resource from a third-party site, the third party can track your users.
Can we specify which email client is used with
<a href="mailto...
On my system it opens Microsoft Outlook. But what if someone does not have Outlook on their system? On such systems, clicking the mailto link does nothing.
No you can't. You can specify the email address, subject and some other parameters for the mail client. But which mail client is started is something the browser decides. It would be quite a security risk if you could decide that as a web developer.
It will open in the system's default email client. If the user does not have one selected, there's nothing you can do about it.
There is much more you can do, but each system will act differently; on mine, for example, I've set up all mailto links to open Gmail.
mailto is a call to open the default mail client, just as clicking a link in a Windows application opens the default browser and not a specific one if you have several installed.
The best way is always to create a form and send the message server-side, either using the web server's internal SMTP or using one of the many free scripts out there that send everything in the form to a specified email address.
And by the way, you can compose more than just the email address:
<a href="mailto:me@domain.com?subject=Call me&body=Call me to this number:">
call me</a>
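One caveat with hand-written mailto links like the one above: spaces and characters such as "&" or ":" in the subject or body should be percent-encoded. A small helper sketching this in Python (the address is an example):

```python
# Sketch: build a correctly percent-encoded mailto link.
from urllib.parse import quote

def mailto(address: str, subject: str = "", body: str = "") -> str:
    """Return a mailto: URL with subject/body percent-encoded."""
    params = []
    if subject:
        params.append("subject=" + quote(subject))
    if body:
        params.append("body=" + quote(body))
    query = ("?" + "&".join(params)) if params else ""
    return f"mailto:{address}{query}"
```

For the example above, `mailto("me@domain.com", "Call me", "Call me to this number:")` produces a link with `%20` for spaces and `%3A` for the colon, which mail clients decode back for the user.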
You, as the site author, have no say. A mailto: link is supposed to launch the user's default mail program. Some users don't have a mail program though (think webmail users.)
The solution is to not use mailto: links but instead create a server-side form on your site, that does the actual mail sending.
On a Windows machine, [HKEY_CLASSES_ROOT\mailto\shell\open\command] contains the path to the program that will open mailto links, so it's not always the default mail program. I agree with Balexandre that a web form gives you the most control, though.