What adds the string ".json" to a URL?

It's a silly question, but we've noticed that one user pretty consistently appends ".json" to the URL when navigating our website. This appended string breaks our URL signature validation, so this user is being rejected quite a lot (and it's showing up in my error log daily; you decide which is worse).
I'm sure there's a browser plugin or something doing it, but I just can't figure out what would cause it.
We have a ColdFusion website that passes a few URL params between pages, and often makes Ajax GET requests for JSON, but we never append ".json" to the URL.
Can you think of what might be causing this, or where I can look for an answer? If/when I know what might be doing this then I might ask another question about appropriate ways to handle it.
Thanks all!

You need to find out a bit more about your user to understand the motivation. Look at the browser, OS, and origin IP, for example. If it's all within your normal user behaviour, then it is potentially something on the customer's device. If it's completely outside your users' normal behaviour, it might be that you are "under attack" and somebody or something is trying to find vulnerabilities in your website.
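If you want to see who is doing it, a minimal logging sketch could help (Express-style middleware; the site in question is ColdFusion, so treat this purely as an illustration of the idea):

```javascript
// Hypothetical Express-style middleware: log identifying details for any
// request whose path ends in ".json", so the offending client can be
// tracked down in the logs.
function logJsonSuffix(req, res, next) {
  if (req.path.endsWith('.json')) {
    console.log('suspicious ".json" suffix:', {
      path: req.path,
      ip: req.ip,
      userAgent: req.get('User-Agent'),
      referer: req.get('Referer'),
    });
  }
  next();
}

// app.use(logJsonSuffix);
```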
Cheers


Is it possible to bypass a layer of security to access a given page of a website?

A lot of web3 websites are emerging where you need to hold, for example, an NFT from a given collection to access some subpages of a web app.
But I don't understand something. Since we have the code of the web app, couldn't someone just download the HTML source, remove the NFT check in that case, and access the subpage?
I know that it isn't that simple to modify an app, especially if they use something like React, but would that be feasible?
I guess not, otherwise they wouldn't do it like this, but I don't get why it isn't possible.
Theoretically, yes they could. Any code that is written on the client-side can be tampered with. If the code on the client-side contains the information necessary to display a particular element, then from a security standpoint, you could assume that someone who's interested enough could reverse-engineer the code and force that element to display, even if you have JavaScript checks that are intended to block it.
The best way for such an application to actually stop important data from being displayed to the user is to only send the data once the server validates that the given user has permission to access it. If the data never gets sent to the client to begin with, and the server is secure, there isn't really anything they can do about it.
Another way is to make it more difficult for clients to reverse-engineer the code, by minifying the code and having a whole lot of it. Minified code is very hard to read. For example, it could take quite some effort to read through 6000 lines of minified code and figure out enough about what's going on to make the changes you want. (Someone may get lucky and happen to stumble onto something close to the listeners they can see attached to elements on the page that gives them what they want - but they may not.)
In general: don't trust anything done on the client-side. The only way to be absolutely certain that data isn't displayed to those who shouldn't be able to see it is to never send it to the client in the first place.
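To make that concrete, here is a minimal sketch of the server-side gating idea (Node/Express style; the route, the session handling, and the checkNftOwnership helper are all hypothetical):

```javascript
const express = require('express');
const app = express();

// Hypothetical ownership check; in practice this would query a chain node
// or an indexing service, not return a hardcoded value.
async function checkNftOwnership(walletAddress) {
  return false; // stub for illustration
}

// The protected route: data is only ever sent after the server-side check.
app.get('/members-only', async (req, res) => {
  // Assume the wallet address was established by a prior, authenticated
  // login step (e.g. a signed message) and stored in the session.
  const wallet = req.session && req.session.walletAddress;
  if (!wallet || !(await checkNftOwnership(wallet))) {
    return res.status(403).send('Forbidden');
  }
  res.json({ secret: 'members-only content' });
});

app.listen(3000);
```

Nothing the client removes from its own copy of the JavaScript changes what this route returns, which is the whole point.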

Why is there a difference between GET and PUT requests? [duplicate]

Back when I first started developing client/server apps which needed to use HTTP to send data to the server, I was pretty naive when it came to HTTP methods. I literally used GET requests for EVERYTHING.
I later learned that I should use POST for sending data and GET for requesting data; however, I was slightly confused as to why this is best practice. From a functionality perspective, I was able to use either GET or POST to achieve the exact same thing.
Why is it important to use specific HTTP methods rather than using the same method for everything?
I understand that POST is more secure than GET (GET makes the data visible in the URL); however, couldn't we just use POST for everything then?
I'm going to take a stab at giving a short answer to this.
GET is used for reading information. It's the 'default' method, and everything uses this to jump from one link to the next. This includes browsers, but also crawlers.
GET is 'safe'. This means that if you make a GET request, you are guaranteed never to change something on the server. If a GET request could cause something to be deleted on the server, that would be very problematic, because a spider/crawler/search engine might assume that following links is safe and automatically delete things.
This is why we have a couple of different methods. GET is meant to allow you to 'get' things from the server. Likewise, PUT allows you to set something new on a server, and DELETE allows you to remove something.
POST's biggest original purpose is submitting forms. You're posting a form to the server and asking the server to do something with that form.
Any client (a human/browser or machine/crawler) knows that POST is 'unsafe'. It won't make POST requests automatically on your behalf unless it really knows that's what you (the user) want. It's also used for operations that are broadly similar to submitting forms.
So when you design your website, make sure you use GET only for getting things from the server, and use POST if your ajax request will cause 'something' to change on the server.
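In browser code the split usually looks something like this (the /api/articles endpoint is made up for illustration):

```javascript
async function demo() {
  // Reading data: GET, which everyone may treat as safe and repeatable.
  const res = await fetch('/api/articles?page=2');
  const articles = await res.json();

  // Changing data: POST, because the request has side effects on the server.
  await fetch('/api/articles', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: 'Hello' }),
  });

  return articles;
}
```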
Fun fact: there are a lot of official HTTP methods, at least 30. You'll probably only ever use a few of them, though.
So to answer the question in the title more precisely:
Why are there multiple HTTP Methods available?
Different HTTP methods have different rules and restrictions. If everyone agrees on those rules, we can start making assumptions about what the intent is. Because these guarantees exist, HTTP servers, clients and proxies can make smart decisions without understanding your specific application.
Suppose you have a task app in which you can store and delete data, and the route of your web page is /xx. To get the web page, to store data using the add button, or to delete data using the delete button, you send requests to the same /xx. So how will the web server know whether you are asking for the web page, want to add data, or want to delete data? That's why we have different request methods: the browser always sends the method name (GET, POST, PUT, DELETE) at the start of the request, so the server can understand what you need.
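A minimal sketch of that idea (Node/Express style; the /xx route and the in-memory task list are just for illustration):

```javascript
const express = require('express');
const app = express();
app.use(express.json());

let tasks = [];

// One path, three methods: the method name is what tells the server
// whether you want the page data, want to add, or want to delete.
app.get('/xx', (req, res) => res.json(tasks)); // fetch the data
app.post('/xx', (req, res) => {                // add a task
  tasks.push(req.body);
  res.status(201).end();
});
app.delete('/xx', (req, res) => {              // delete everything
  tasks = [];
  res.status(204).end();
});

app.listen(3000);
```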

HTML- Required "bug" [duplicate]

Using a simple tool like Firebug, anyone can change JavaScript parameters on the client side. If anyone takes the time to study your application for a while, they can learn how to change JS parameters and end up hacking your site.
For example, a simple user could delete entities which they can see but are not allowed to change. I know a good developer must check everything on the server side, but this means more overhead: to validate a request you must first check it against data from the DB. This takes a lot of time; someone must validate every action, and can only do so by fetching the needed data from the DB.
What would you do to minimize hacking in that case?
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature over the previous parameters and a secret key.
How good does the solution above sound to you?
Our team uses teamworkpm.net to organize our work. I just discovered that I can edit someone else's tasks by changing a JavaScript function (which initially edits my own tasks).
On every function call to the server, before you perform the action on the server side, you need to check whether this user is allowed to perform that action.
It is necessary to build a server-side permissions mechanism to prevent unwanted actions. You may want to define groups of users rather than permissions at the individual user level; it makes things easier to manage.
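A sketch of what such a group-based check might look like on the server (the groups, actions, and lookup table are made up):

```javascript
// Hypothetical group-based permission table, consulted on the server
// before any action runs.
const permissions = {
  admin:  ['read', 'edit', 'delete'],
  member: ['read', 'edit'],
  guest:  ['read'],
};

function isAllowed(user, action) {
  return (permissions[user.group] || []).includes(action);
}

// In a request handler, before performing the action:
// if (!isAllowed(req.user, 'delete')) return res.status(403).end();
```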
Anything on the client side can be spoofed. If you use some kind of secret key + parameter signature, your signature algorithm must be sufficiently random/secure that it cannot be reverse-engineered.
The overhead created with adding client side complexity is better spent crafting proper server side validations.
What would you do to minimize hacking in that case?
You can't get around validating on the server side.
A simpler way to validate would be to add another parameter to every JavaScript function; this parameter would be a signature over the previous parameters and a secret key.
How good does the solution above sound to you?
And how do you use the secret key without the client seeing it? As you yourself mentioned, the user can easily manipulate your JavaScript, and they can also read everything in it, including the secret key!
You can't hide anything in JavaScript; the only thing you can do is obscure things, and hope nobody tries to find out what you are trying to hide.
This is why you must validate everything on the server. You can never guarantee that the user won't mess about with things on the client.
Everything, even your JavaScript source code, is visible to the client and can be changed by them; there's no way around this.
There's really no way to do this completely client-side. If the person has a valid auth cookie, they can craft any sort of request they want, regardless of the code on the page, and send it to your server. You can do things with other, encrypted cookies that must be sent back with the request and must also match the inputs on the page, but you still need to check this server-side. Server-side security is essential in protecting your application from unauthorized access, and you must ensure, server-side, that every action being performed is one that the user is authorized to perform.
You certainly cannot hide anything client side, so there is little point in trying to do so.
If what you are saying is that you are sending something like a user ID and you want to ensure that the returned value has not been illicitly changed, then the simplest way of doing so is probably to generate and send a UUID alongside it, and check on return that the value of the UUID matches the one stored on the server for that user ID before doing any further processing. The space of UUIDs is so large that you can discount any false hits ever occurring.
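A minimal sketch of that UUID pairing (Node style; the in-memory Map stands in for whatever server-side storage you actually use):

```javascript
const crypto = require('crypto');

// In-memory store purely for illustration; real code would persist this
// server side (session, DB, cache).
const issued = new Map(); // userId -> uuid

function issueToken(userId) {
  const uuid = crypto.randomUUID();
  issued.set(userId, uuid);
  return uuid; // sent to the client alongside the user ID
}

function verifyToken(userId, uuid) {
  // A mismatch means the user ID coming back was tampered with.
  return issued.get(userId) === uuid;
}
```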
As to actual server-side processing vulnerabilities: you should simply always build your security/permissions in as close to the database as you can, and definitely not in the client. There's nothing different in the scenario you outline from any normal client-server design.
Peter from Teamworkpm.net here. I'm one of the main developers, and I was concerned to come across this report of a security problem. I checked into this, and I am happy that it is not possible to delete a task that you shouldn't have access to.
You get a message saying "You do not have permission to delete this task".
I think the problem here is just the confusion between being a Project Administrator and being an overall Administrator: you may not be a member of a project, but as an overall administrator you still have permission to delete any task within your Teamwork site. This is by design.
We take security very seriously, and it's all implemented server side because, as Jens F says, we can't rely on client-side security.
If you do come across any issues in TeamworkPM that you would like to discuss, we'd encourage any of you to just hit the feedback link and you'll typically get an answer within a few hours.

What is the general consensus on user-error correction for web apps?

I'm building a RoR site, and today I got the pagination done. Upon showing it to my coworker, his first question was "what happens if you set the querystring to ?page=-1?" It died with a runtime exception (error 500). He suggested that this should definitely be fixed before the site goes anywhere near live.
I happen to disagree with him (hear me out). Now, I've been in the web dev business for all of four months, so I could very well be wrong. But I would think that this isn't a big deal. I would think that, so long as such errors do not constitute a security risk, things like this shouldn't be a priority. The only way to cause this error is to manually edit the query string, and, well, garbage in, garbage out. If you're smart enough to know that you can even edit the querystring, you should be smart enough not to give it a negative number.
What is the general consensus on things like this? Do you completely idiot proof the site, so that no matter what the query string is, you never generate an error? Do you let things slide so long as it works the way it's supposed to (and doesn't expose a security risk)? Somewhere in the middle?
EDIT: Somehow my question didn't really come out as I intended it. The crux of my question was where to draw the line between proactively correcting for things and not doing so. If there's invalid input in the GET string, for instance, would it be better practice to display a tasteful error as suggested in the posted replies, or to try to figure out what the user was doing and do that? Or, as a more concrete example: if a user sets page=-1 in the GET string, would it be better to silently assume they meant page=0, or to display some kind of tasteful error page saying something like "invalid page specified"?
You should be error-checking anything that comes in from the query string. If you get an invalid page number, you should show an error message that's a little more graceful than the error 500 page: maybe "Sorry, bad request. Try this: <possible suggestions>". It's just plain sloppy and unprofessional to knowingly and deliberately leave an easily accessible error like that on a live site.
You say you're new to web apps, but if your previous dev experience was other GUI apps being used by the "general public" (non-developers, non-techies), would it have been OK to have stack traces thrown into the user's face as the app falls apart around them? In my experience, this is never really acceptable.
You make some good points, but an incorrect query string can have many reasons. For example, a link to a record that has since been deleted. Or a Google result pointing to a page that doesn't exist in the current result set any more.
In these cases, you should show the user something a bit more verbose than a 500 error.
If you have an error-page that looks nice, and gives a polite message, I'd say it's fine. Though I might consider responding with a 404 instead. Garbage in should preferably not produce an error.
I don't think a 500 error page is very meaningful to your average user. At least tell him something is wrong with your page and guide him back on the right track by providing a link to get back to your site.
Sometimes I redirect users to the page they most likely wanted. So when a page query goes below zero and this is not permitted, redirect the user to ?page=0 and maybe display a message on top of that page. I'd prefer this approach because, in terms of user experience, it beats interrupting the user with a modal window.
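For example (the question is about Rails, but the idea is framework-agnostic; here it is as an Express-style sketch with a hypothetical /items route):

```javascript
const express = require('express');
const app = express();

// Instead of letting ?page=-1 blow up with a 500, silently correct the
// garbage input and redirect to a valid page.
app.get('/items', (req, res) => {
  const page = parseInt(req.query.page, 10);
  if (Number.isNaN(page) || page < 0) {
    return res.redirect('/items?page=0');
  }
  res.send(`Rendering page ${page}`);
});

app.listen(3000);
```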
I agree with you that error messages are necessary and useful, but you should try to differentiate, e.g. return a 404 where the user requested a page that doesn't exist.
It varies from project to project. How many users do you expect? If it's below 10K visitors a day it might not be so bad. What percentage of users do you expect will hit the problem? I don't expect that very many but you would know best.
The goal should be to ship the product and roll out improvements regularly. Hopefully the product is sound overall.
Regarding a solution: if it's a page not found, a 4xx error should be returned instead of a 5xx. 5xx errors typically warrant a deeper look, and while it's hard to write an air-tight application right at launch, you should try to have a generic handler for 4xx and 5xx errors.
In the PCI game (Credit Card Verification / Validation) the rule is validate everything and allow for no idiots. So the answer depends on your application.

Detecting what changed in an HTML Textfield

For a major school project I am implementing a real-time collaborative editor. For a little background, basically what this means is that two (or more) users can type into a document at the same time, and their changes are automatically propagated to one another (similar to Etherpad).
Now my problem is as follows:
I want to be able to detect what changes a user made to an HTML textfield. They could:
Insert a character
Delete a character
Paste a string of characters
Cut a string of characters
I want to be able to detect which of these changes happened and then notify other clients similar to "insert character 'c' at position 2" etc.
Anyway, I was hoping to get some advice on how I would go about implementing the detection of these changes.
My first attempt was to consider the caret position before and after a change occurred, but this failed miserably.
For my second attempt, I was thinking about doing a diff on the textfield's entire old and new values. Am I missing anything obvious with this solution? Is there something simpler?
Making this work today is really hard, for several reasons, but here are some pointers:
You may need to restrict support to certain browsers. Read https://developer.mozilla.org/en/XUL/Attribute/oninput. The alternative to "oninput" is listening to all input events (keyboard, mouse, drag and drop); I suggest using "oninput".
HTML is not perfect, even HTML5: inputs and textareas support only single-range selections. You can solve this by using designMode/contentEditable instead of textareas/textfields.
Detecting the offsets of what changed is hard work. Read:
- https://developer.mozilla.org/en/Document_Object_Model_%28DOM%29/window.getSelection
- http://www.quirksmode.org/dom/range_intro.html
- http://msdn.microsoft.com/en-us/library/ms535869%28v=VS.85%29.aspx
- http://msdn.microsoft.com/en-us/library/ms535872%28v=VS.85%29.aspx
You may need a "diff" algorithm written in JavaScript (see the sketch below): http://ejohn.org/projects/javascript-diff-algorithm/
One personal note: detecting word or character changes may be total nonsense and not useful; detect paragraph changes instead, or in the case of an Excel-like worksheet, the single cell.
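For the question's second attempt (diffing the textfield's old and new values), the usual trick is to trim the common prefix and suffix; a minimal sketch, with a made-up edit format:

```javascript
// Given the textfield's old and new values, find the changed span by
// trimming the common prefix and suffix. Returns an edit like
// { at: 2, removed: '', inserted: 'c' }, meaning "insert 'c' at position 2".
// A single contiguous insert, delete, paste, or cut all reduce to this.
function diffValues(oldVal, newVal) {
  let start = 0;
  while (start < oldVal.length && start < newVal.length &&
         oldVal[start] === newVal[start]) {
    start++;
  }
  let endOld = oldVal.length;
  let endNew = newVal.length;
  while (endOld > start && endNew > start &&
         oldVal[endOld - 1] === newVal[endNew - 1]) {
    endOld--;
    endNew--;
  }
  return {
    at: start,
    removed: oldVal.slice(start, endOld),
    inserted: newVal.slice(start, endNew),
  };
}

// diffValues('abd', 'abcd') -> { at: 2, removed: '', inserted: 'c' }
// diffValues('abcd', 'ad')  -> { at: 1, removed: 'bc', inserted: '' }
```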
I hope this helps
feel free to correct my English!
My pseudocode/written-out response would be (if I understand your question correctly) to use jQuery to detect keyup events and then save the input to the server via Ajax, then also take the response and post it back to the input. This isn't very efficient, but basically the idea is that you're constantly posting and checking what else has been posted. If you want to see what someone else is doing in real time, you can ping the server every second or so and update with the response.
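A plain-DOM sketch of that poll-and-push idea (the /doc endpoint and the #editor field are hypothetical, and concurrent edits will clobber each other under this last-write-wins scheme):

```javascript
const field = document.querySelector('#editor');

// Push the current value on every keyup.
field.addEventListener('keyup', () => {
  fetch('/doc', { method: 'POST', body: field.value });
});

// Pull the latest shared value every second and overwrite the field
// if someone else has changed it.
setInterval(async () => {
  const latest = await (await fetch('/doc')).text();
  if (latest !== field.value) field.value = latest;
}, 1000);
```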
All of this can of course be optimized, but it is still kind of taxing for a server. You could also see if you can use Google Wave for your project, or get in touch with the Google Wave team to see how they do it :)