Need to have many different URLs resolve to a single web page (HTML)

And I don't want to use GET params.
Here are the details:
The user has a bunch of photos, and each photo must be shown by itself and have a unique URL of the form
www.mysite.com/theuser/photoNumber-N
I create a unique URL for a user each time they add a new photo to their gallery.
The web page that displays the user's photo is the same code for every user and every photo; only the photo itself is different.
The user gives a URL to Person-A, but then Person-A has one URL to that one photo and cannot see the user's other photos (because each photo has a unique URL and Person-A was given only one URL for one photo).
I want the following URLs to (somehow) end up loading only one web page, with only the photo contents being different:
www.mysite/user-Terry/terryPhoto1
www.mysite/user-Terry/terryPhoto2
www.mysite/user-Jackie/JackiesWeddingPhoto
www.mysite/user-Jackie/JackiesDogPhoto
What I'm trying to avoid is having many copies of the same web page on my server, with the only difference being the .jpeg filename.
If I have 200 users and each has 10 photos, and I fulfill my requirement that each photo is on a page by itself with a distinct URL, then right now I've got 2,000 web pages. Each displays a unique photo and takes space on my web server, and every page is identical, redundant, disk-space-wasting HTML code; the only difference is the .jpeg filename of the photo to display.
Is there something I can do to avoid wasting disk space and still meet my requirement that each photo has a unique URL?
Again I cannot use GET with parameters.

If you are on an Apache server, you can use Apache's mod_rewrite to accomplish just that. While the script you are writing will ultimately still be fetching GET variables (www.mysite.com/photos.php?id=photo-id), mod_rewrite will convert all the URLs served into the format you choose (www.mysite.com/user-name/photo-id).
Some ways you can implement it can be found here and here, while the actual documentation on the Apache module itself can be found here.
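As a rough sketch of the idea, a .htaccess rule like the following maps the pretty URLs onto a single script. The script name photos.php and its parameter names are assumptions; adjust them to whatever your page actually uses:

```apache
RewriteEngine On
# Don't rewrite requests for real files (the .jpeg images themselves, CSS, etc.)
RewriteCond %{REQUEST_FILENAME} !-f
# /user-Terry/terryPhoto1  ->  /photos.php?user=user-Terry&id=terryPhoto1
RewriteRule ^([^/]+)/([^/]+)/?$ photos.php?user=$1&id=$2 [L,QSA]
```

The visitor only ever sees www.mysite.com/user-Terry/terryPhoto1; the one photos.php script serves every photo.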

Go to IIS Manager, then to the site hosted in IIS, and add an additional binding for each URL.
This will direct all requests to the same location.

Related

How to download a document from an iPaper swf

Hi guys, I am trying to download a document from a swf link in iPaper.
Please guide me on how I can download the book.
Here is the link to the book which I want to convert to pdf or word and save
http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/iPaper.swf
Your kind guidance in this regard would be appreciated.
Regards,
Muneeb
First, open the book in your browser with network capturing on (in the developer tools). You should open many pages at different locations, with and without zoom, then look at the captured data.
You will see that for each new page you open, the browser asks for a new file (or files).
This means there is a file for each page, and from that file your browser creates the image of the page. (Usually there is one file per page, in some picture format, but I have encountered base64-encoded pictures and a picture cut into four pieces.)
So we want to download and save all the files that contain the book's pages.
Now, usually there is a consistent pattern to the addresses of the files, with some incrementing number in them (as we can see in the captured data from the difference between successive files). Knowing the number of pages in the book, we can guess the remaining addresses up to the end of the book (and of course download all the files programmatically in a loop),
and we could stop here.
But sometimes the addresses are a bit difficult to guess, or we want the process to be more automatic. Either way, we want to get the number of pages and all the page addresses programmatically.
So we have to check how the browser knows that. Usually the browser downloads some files at the beginning, and one of them contains the number of pages in the book (and possibly their addresses). We just have to look through the captured data, find that file, and parse it in our program.
At the end there is the issue of security:
some websites try to protect their data one way or another (usually using cookies or HTTP authentication). But if your browser can access the data, you just have to track how it does it and mimic it.
(If it is cookies, the server will respond at some point with a Set-Cookie: header. It could be that you have to log in to view the book, so you have to track that process too; usually it's via POST messages and cookies. If it is HTTP authentication, you will see something like Authorization: Basic in the request headers.)
In your case the answer is simple:
(all file names are relative to the main file's directory, "http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/")
There is a "manifest.zip" file that contains a "pages.xml" file, which holds the number of files and links to them. We can see that for each page there is a thumb, a small, and a large picture, so we want just the large ones.
You just need a program that loops over those addresses (from Paper/Pages/491287/Zoom.jpg to Paper/Pages/491968/Zoom.jpg).
Finally, you can merge all the JPEGs into a PDF.
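The loop described above can be sketched in Node like this. The base URL and the page-ID range come from the captured data; the file-naming scheme is an assumption, and merging the JPEGs into a PDF is a separate step (e.g. with a tool such as ImageMagick):

```javascript
// Base directory observed in the captured network data.
const BASE = "http://en-gage.kaplan.co.uk/LMS/content/live_content_v2/acca/exam_kits/2014-15/p6_fa2014/";

// Build the list of page-image URLs for an inclusive ID range.
function pageUrls(first, last) {
  const urls = [];
  for (let id = first; id <= last; id++) {
    urls.push(`${BASE}Paper/Pages/${id}/Zoom.jpg`);
  }
  return urls;
}

// Download each page image and save it to disk (Node 18+ global fetch).
// Not run here; network access and the server's cooperation are assumed.
async function downloadAll(first, last) {
  const fs = require("fs/promises");
  let page = 1;
  for (const url of pageUrls(first, last)) {
    const res = await fetch(url);
    if (!res.ok) throw new Error("failed: " + url);
    const name = `page-${String(page++).padStart(4, "0")}.jpg`;
    await fs.writeFile(name, Buffer.from(await res.arrayBuffer()));
  }
}
```

For the book above you would call downloadAll(491287, 491968), then merge the resulting JPEGs into one PDF.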

Website images don't get loaded, URL looks cryptic

So on a website I am working on, some images don't get loaded.
The console says: Failed to load resource: net::ERR_NAME_NOT_RESOLVED
and then a link like this:
http://s234127563.online.de/wp-content/uploads/2014/04/myimage.png
If I change the first part of the URL, http://s234127563.online.de/, to the actual URL, http://example.org/, the images get shown.
Does anybody know what this problem is about? Maybe some DNS thing? I tried different browsers, renewed my IP address, flushed DNS, etc., but nothing changed.
Looks like a mismatch in your database. WordPress stores your base URL to generate permanent links etc.
Change the URL stored in WordPress. There’s a page dedicated to that on the WordPress Codex. Hardcoding the URL in wp-config is most reliable, but perhaps not most desired.
The URLs are saved in posts etc, so you may have to update those. The Velvet Blues Update URLs plugin can do this for you.
Manually update non-default fields (theme options, custom fields, etc)
If none of the above works... are you using a CDN of some sort?
Check your database: in the wp_options table there should be two rows, one called siteurl and the other called home. Make sure both are set to your domain name, so http://example.org/
WordPress sets this URL during installation, and if, for example, you had it set to something different before (say, after a migration to a different domain), it can differ from the domain the website is currently being served on.
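For reference, the check and fix described above look roughly like this in SQL. This assumes the default wp_ table prefix, which may differ on your install, and example.org stands in for your real domain:

```sql
-- Inspect the two options WordPress uses to build URLs:
SELECT option_name, option_value
  FROM wp_options
 WHERE option_name IN ('siteurl', 'home');

-- Point both at the domain the site is actually served from:
UPDATE wp_options
   SET option_value = 'http://example.org'
 WHERE option_name IN ('siteurl', 'home');
```

Remember that URLs hardcoded inside post content still need a separate search-and-replace, as noted above.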

Handling HTML PDFs with Auth Required Images

I'm currently creating PDF documents server-side with wkhtmltopdf and nodejs. The client side sends the HTML to be rendered (which may include img tags with a source). When the user previews the HTML in the browser, the images they uploaded to their account show fine, because the user is authenticated via the browser and the node route can simply look up the image based on the user id (saved to session) and image id (passed in each image request).
The issue is when the images are attempting to be rendered in wkhtmltopdf webkit the renderer is not authenticated when it makes the request for images via node's exec of wkhtmltopdf in a separate process. A request to something like GET /user/images/<imageId> will fail due to the session not being set when the request is made inside the headless wkhtmltopdf renderer.
Is there a way to pass authentication via some wkhtmltopdf option or possibly a different way of authentication for images? The only restriction is not making images public.
I asked a similar question a while back that might help you:
Generate PDF Behind Authentication Wall
WKHTMLTOPDF has --cookie-jar, which should get you what you need. Note that it didn't for me, and I wound up answering my own question with an alternate solution. In a nutshell, I accessed the page via cURL (much more flexible), wrote a temporary file that I converted to PDF, then deleted the temporary file.
A little roundabout, but it got the job done.
To implement authentication I allowed a cookie id flag (with connect, the key defaults to connect.sid) as a query option in my image routes. The only "gotcha" is that since images are requested from the server's perspective, you must ensure all your image paths are absolute domain paths rather than relative to your application (unless those two are the same, of course).
Steps for Expressjs:
Set up the id-flag middleware, which checks for, say, sid in the query via req.query (e.g. ?id=abc123 where abc123 is the req.cookies['connect.sid'], or req.signedCookies['connect.sid'] if you're using a secret, as you probably should). You may need to ensure the query middleware is set up first.
Ensure req.headers contains this session id key and value before the cookie parser runs, so the session is properly set up (e.g. if a cookie header already exists, prepend to it; if none does, add it: req.headers.cookie = 'connect.sid=abc123;').
Ensure all image paths contain the full URL (e.g. https://www.yourdomain.com/images/imageId?id=abc123).
Some extra tidbits: the image source replacement should probably happen at the server level, to ensure the user does not copy/paste the image URL with the session id and, say, email it to a friend, which obviously leaves the door open for account hijacking.
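The first two steps above can be sketched as a plain middleware function. This is a minimal sketch, assuming the connect.sid cookie name and an id query parameter; no framework is required since Express middleware is just a function of (req, res, next):

```javascript
// If ?id=<session id> is present, copy it into the Cookie header so the
// regular cookie/session middleware that runs afterwards picks it up.
function sidFromQuery(req, res, next) {
  const sid = req.query && req.query.id;
  if (sid) {
    const cookie = "connect.sid=" + sid;
    req.headers.cookie = req.headers.cookie
      ? cookie + "; " + req.headers.cookie // prepend to existing cookies
      : cookie;                            // or set it as the only cookie
  }
  next();
}
```

Mount it before the cookie parser (e.g. app.use(sidFromQuery) in Express), and only on the image routes if you want to keep its scope narrow.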

linking from one server to another but keeping the same domain name

I have a splash page hosted at www.someserver.com and I'm looking to have one link on the page lead to a site hosted on another server, www.anotherserver.com. I probably need to keep these on distinct servers for the time being.
I'm hoping to have the whole thing appear under a single company domain, www.company.com, and to have it so that when a user clicks on this link, you see neither www.someserver.com nor www.anotherserver.com, just www.company.com.
I know that I can set up some kind of domain masking (we're using GoDaddy for our hosting), but I'm worried that clicking out to www.anotherserver.com is going to keep this from all appearing under the same domain.
Is there a way to set this up so that links from and to both of these servers appear as www.company.com?
The good news is that what you are wanting to do is very common; it is called "proxying". I believe that GoDaddy uses Apache, so specifically you will want to look for mod_rewrite. You will more than likely simply have to provide a .htaccess file with the rewrite rules in it.
Here are the gotchas that you will run into though:
Client hits www.company.com
www.company.com pulls content from www.anotherserver.com
www.company.com sends content back to the client.
The issues come in with the data that www.anotherserver.com sends back to the client. Enter the world of relative and absolute paths.
Let's say that the actual request is www.company.com/widget:
Client hits www.company.com/widget
www.company.com requests www.anotherserver.com/widget
www.anotherserver.com/widget returns a page but on that page it has a link to www.anotherserver.com/widget/image.jpg
www.company.com returns the content back to the client but now the client has a link to www.anotherserver.com/widget/image.jpg
You will need to make sure that your backend servers use relative paths instead of absolute paths. http://en.wikipedia.org/wiki/Path_(computing)
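A minimal sketch of the proxying rule for the /widget example above, assuming mod_rewrite and mod_proxy are both enabled on the host (shared hosts don't always allow the P flag):

```apache
RewriteEngine On
# Fetch /widget... from the other server while the browser
# still shows www.company.com in the address bar.
RewriteRule ^widget(.*)$ http://www.anotherserver.com/widget$1 [P,L]
```

Even with this in place, the relative-vs-absolute-path issue described above remains: any absolute link the backend emits will still point at www.anotherserver.com.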

Is it possible to put binary image data into html markup and then get the image displayed as usual in any browser?

It's an important security issue and I'm sure this should be possible.
A simple example:
You run a community portal. Users are registered and upload their pictures.
Your application has security rules governing when a picture is allowed to be displayed. For example, users must be friends on both sides, according to the system, in order to view each other's uploaded pictures.
Here comes the problem: it is possible that someone crawls the image directories of your server. But you want to protect your users from such attacks.
If it's possible to put the binary data of an image directly into the HTML markup, you can restrict access to your image directories to the user and group your web application runs as, and pass the image data directly in the HTML.
The only possible weakness then is the password of the user that your web app runs as.
Is there already a possibility?
There are other (better) ways, described in other answers, to secure your files, but yes it is possible to embed the image in your html.
Use the <img> tag this way:
<img src="data:image/gif;base64,xxxxxxxxxxxxx...">
Where the xxxxx... part is a base64 encoding of gif image data.
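Generating that data URI programmatically is straightforward once you have the raw bytes. A small sketch in Node (the function name is illustrative):

```javascript
// Build a data URI for an <img> src from raw image bytes.
function dataUriFromBuffer(buf, mime) {
  return `data:${mime};base64,${buf.toString("base64")}`;
}

// Typical use, reading the image from a private directory:
// const fs = require("fs");
// const tag = `<img src="${dataUriFromBuffer(fs.readFileSync("photo.gif"), "image/gif")}">`;
```

Keep in mind data URIs bloat the HTML by about a third (base64 overhead) and aren't cached separately from the page.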
If I needed security on my images directory I wouldn't expose the directory at all. Instead my img src attributes would reference a page that would take a userid and an image id as a parameter.
The page would validate that that user does indeed have access to see that picture. If everything's good, send the binary back; otherwise send nothing.
for example:
<img src="imgaccess.php?userid=1111&imgid=223423" />
Also, I wouldn't use guessable IDs; instead stick to something like base64-encoded GUIDs.
I'm not sure I understand, but here goes. Instead of serving up static images that reside in an images folder - why couldn't you, using your server side technology of choice, have the images dynamically sent down to the client? That way your server side code can get in the mix and allow or deny access programmatically?
<img src="/images/getImage.aspx?id=123353" />
You could move the pictures out of the document root into a private directory and deliver them through your application, which has access to that directory. Each time your app generates an image tag, it then also generates a short-lived security token which must be specified when accessing a particular image:
<img src="/app/getImage.xyz?image=12345&token=12a342e32b321" />
Chances are very rare that someone will brute force the right token at the right time with the right image.
There are at least two possibilities for verifying the token in "getImage":
Track all image tags in your app and store records in a database which link the randomly generated tokens and image IDs to the requesting users. The "getImage" action then checks the supplied parameters against that database.
Generate the token as a checksum (MD5, CRC, whatever) over the user ID, the image ID, and maybe the current day of the year, and be sure to mix in an unguessable salt. The "getImage" action will then recompute the checksum and check it against the specified one in order to verify the user's access. This method produces less overhead than the first one.
PHP example:
$token = md5($_SESSION['user_id'].' '.$imageID.' '.$SECRET_SALT.' '.date('z'));
With HTML5 you could use the canvas tag and JavaScript to do this.
You could perhaps do something with either CSS or a table layout to draw a picture (probably really bad performance, resolution, portability).
Either way, there is no stopping people from taking your pics. They could take a screenshot and crop it out.
As Chris mentioned in his answer, having long picture IDs, so that the URL for each image is not easy to guess or brute-force, is important. And so is having no directory listing on your web server directories.
https://www.base64-image.de/
I used this website to generate the base64 code for a given image; the site then provides markup you can paste directly. It worked.