In my 4 years of experience, I have developed a lot of web applications. Now that the concept of the programmable web is getting more and more popular, new APIs are being released almost every day. I would like to develop a Java API/library for a few of these endpoints, e.g. Stack Apps, Reddit, Digg, etc. What I would like to know from you is:
1. How does the API of a regular web app differ from the API of these libraries? Or, what is the difference between the two from a design perspective?
2. What are the best API development practices?
3. What are all the factors that I need to consider before designing the API?
Please comment if the details are not sufficient.
Stability
If you offer an API to your web app, it is probably because you want other people to build applications using it. If it is not stable, they will hate you for forcing them to keep up with your frequent changes. If adapting takes too long, their site might remain non-functional for a long time while they figure out the new way of doing things in your API.
Compactness
You want the API to be complete but compact, as in not too much to remember.
Orthogonality
Design it so there is one and only one way to change each property or trigger an action. Actions in an orthogonal API should have minimal (if any) side effects.
Also, it's not a good practice to remove a feature from a public API once released.
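As a rough illustration of orthogonality (sketched in Python for brevity, though the same idea carries over to a Java API; the resource and method names are made up):

    class Article:
        # Hypothetical resource: each property has exactly one mutator.

        def __init__(self, title, body):
            self._title = title
            self._body = body
            self._published = False

        def set_title(self, title):
            # Changes the title and nothing else -- no hidden side effects.
            self._title = title

        def set_body(self, body):
            # The one and only way to change the body.
            self._body = body

        def publish(self):
            # The one and only way to trigger the "publish" action.
            self._published = True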
Security and Authentication
Since the API is web-exposed, you will have to authenticate each request and grant appropriate access. Security common sense applies here.
Fast Responses or Break into pieces
I believe that in a web environment we should have fast responses and avoid requests that take too long to complete. If that's unavoidable, it is better to send an ACK immediately and break the task into several pieces and subsequent calls.
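A minimal sketch of that acknowledge-then-poll pattern, using Flask and an in-memory job store (the endpoints and the store are hypothetical; a real service would persist jobs and use a proper task queue):

    import threading
    import uuid

    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {}  # hypothetical in-memory job store

    def long_task(job_id):
        # ... do the heavy work here, possibly in several pieces ...
        jobs[job_id] = "done"

    @app.route("/reports", methods=["POST"])
    def start_report():
        job_id = str(uuid.uuid4())
        jobs[job_id] = "pending"
        threading.Thread(target=long_task, args=(job_id,)).start()
        # Acknowledge immediately; the client polls for the result.
        return jsonify(job=job_id), 202

    @app.route("/reports/<job_id>", methods=["GET"])
    def report_status(job_id):
        return jsonify(status=jobs.get(job_id, "unknown"))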
From my experience, good APIs were not made to solve a generic problem, but to solve a specific problem for someone who needed a certain abstraction. That abstraction then evolves as the requirements and/or the underlying layers change.
So instead of trying to find the API that will do it all, I'd start by finding one or two good problem cases where your API could help.
I have a 10 GB SQL database and want to provide access to that data to a mobile app using a REST API. The mobile app will be used by fewer than 100 users. My DB is a bit sluggish, as it was not built for so much data but has grown over the years. My question is: will the REST API create more burden for my DB?
A REST API isn't going to create any extra burden on the DB if it's a normal client/server setup.
Let me give a quick example of how a REST API works:
Client <--(REST API protocol)--> Server <--(query optimization and similar tuning to improve DB performance)--> DB
Before REST, the server used to keep some data about the client, mostly known as session data. But that created a burden for the server, as it used more memory, and it also made things dependent on the user's state: to do certain operations, the user had to follow certain steps beforehand.
In the REST architecture, however, every method/call is independent of any previous call.
So basically, REST is just another design for communication between two or more parties (services, clients, whatever).
So I don't see the REST API itself affecting your DB (though, again, that depends on your product/service architecture design, developer quality, etc.).
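To illustrate the statelessness point: every request carries everything the server needs to answer it, such as an auth token, so no call depends on a previous one. A hypothetical sketch using the requests library (the URL and token are made up):

    import requests

    TOKEN = "..."  # obtained once, then sent with every call

    # Each call is self-contained: the server keeps no session state,
    # so these two requests could hit different servers and still work.
    r1 = requests.get("https://api.example.com/orders/42",
                      headers={"Authorization": "Bearer " + TOKEN})
    r2 = requests.get("https://api.example.com/orders/43",
                      headers={"Authorization": "Bearer " + TOKEN})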
I'm asking because I am not sure what kind of person we'll need to hire (ASP? Sitecore? Angular? jQuery?) to implement the following for us:
Our school is looking to make data on courses (JSON format, about 600 courses) available as an “online catalog.” The static info (program information, resources, etc.) will be hosted in Sitecore 7.
We’d like to see the online course catalog closely integrated with the rest of the site, so we’re looking for best approaches on how to do that.
Some manipulation of the JSON data is required: course detail pages should be simple enough, but we'll also need course listings (not necessarily displaying all 600 courses at once in one long list, but segmented by programs, class formats & locations, etc.) as well as a “course search” functionality.
Would Sitecore do that well enough out of the box, or would it be better/easier to go with something like AngularJS on top of Sitecore?
Please ask me for additional info if I have left something important out or if anything is unclear.
I agree with Dijkgraaf's comment, but to provide you with an answer: Sitecore is suitable for your requirements, but it is a framework, which means it won't meet your requirements out of the box, so you will need a developer who knows Sitecore and, by extension, .NET (Sitecore is built on .NET).
These developers will also know how to work with JSON, most likely serving it up from Sitecore via a .NET technology called Web API. The JSON can then be manipulated with JavaScript or AngularJS. It is less common for Sitecore developers to be familiar with AngularJS, however.
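Whichever stack serves it, the JSON manipulation itself is modest at this scale. A platform-neutral sketch in Python of the listing and search requirements (the field names are invented for illustration):

    import json

    with open("courses.json") as f:
        courses = json.load(f)  # a list of ~600 course dicts

    # Segmented listing: group by program instead of one long list.
    by_program = {}
    for course in courses:
        by_program.setdefault(course.get("program", "Other"), []).append(course)

    # A naive "course search" over titles and descriptions.
    def search(term):
        term = term.lower()
        return [c for c in courses
                if term in c.get("title", "").lower()
                or term in c.get("description", "").lower()]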
I was just thinking of possible ways to go about temporary login systems. I was thinking of having a bunch of your standard images with a jumbled-up word, where users type in the word. I would have a MySQL table in which every photo has a unique id, a link, and an answer key. That way, the webpage just has to choose a random number, GET the photo where id = random number, and then compare what the user types in to the answer key of the photo.
I'm not currently trying to create this system; it seems very simple, and I was just trying to work out whether it is a secure system that would work.
So my question really is: would there be security risks with this? Is it robust enough to keep out bots, or would my site be destroyed 10 seconds after implementing it?
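For reference, a minimal sketch of the scheme described above, using SQLite in place of MySQL (the table and column names are illustrative, and ids are assumed to be contiguous):

    import random
    import sqlite3

    db = sqlite3.connect("captcha.db")
    # Assumed schema: captcha(id INTEGER PRIMARY KEY, link TEXT, answer_key TEXT)

    def pick_challenge():
        count, = db.execute("SELECT COUNT(*) FROM captcha").fetchone()
        row_id = random.randint(1, count)  # assumes ids 1..count all exist
        link, key = db.execute(
            "SELECT link, answer_key FROM captcha WHERE id = ?",
            (row_id,)).fetchone()
        return link, key  # show `link` to the user; keep `key` server-side

    def check_answer(key, user_input):
        return user_input.strip().lower() == key.lower()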
What you're describing sounds exactly like a CAPTCHA system. These are used widely to prevent bots from issuing automated requests against an interface. The problem is that it's hard to make images that a bot can't just interpret anyway.
Outsmarted: Captcha security not much of a gotcha is an article about some Stanford researchers who developed an image-recognition tool (which is not publicly available) to test captcha implementations:
Decaptcha was able to decode 66 percent of the Captchas used by Visa's Authorize.net payment site, 70 percent of Blizzard Entertainment's Captchas -- the company's games include World of Warcraft and Diablo -- and 25 percent of Wikipedia's. About one-fifth of Digg.com's Captchas and almost that many of CNN.com's were decodable.
The researchers recommended Google's reCAPTCHA as a much more effective system. You can add a reCAPTCHA widget to your own website. This would be safer and easier than trying to develop your own, only to find it to be too weak.
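On the server side, verifying a reCAPTCHA response comes down to one HTTP call to Google's siteverify endpoint. A sketch in Python using the requests library (your secret key and the token posted by the widget are the inputs):

    import requests

    def verify_recaptcha(secret_key, response_token, user_ip=None):
        # Google's verification endpoint for reCAPTCHA.
        r = requests.post(
            "https://www.google.com/recaptcha/api/siteverify",
            data={"secret": secret_key,
                  "response": response_token,
                  "remoteip": user_ip})
        return r.json().get("success", False)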
Short answer: no, it's not secure. If someone really wants to hack your system, they can build their own database of image-word pairs.
The key is to invest less in security than it would cost you if your system were compromised, so I wouldn't invest too much in a security system (it sounds like you don't really have sensitive information to hide).
BUT, you have an easy & free solution. You can use reCAPTCHA; not only is it much more secure, you'll also help digitize some useful information.
I am developing an open-source desktop Twitter client. I would like to take advantage of the new xAuth authentication method; however, my app is open source, which means that if I put the keys directly into the source file, it may be a vulnerability (am I correct? That's what the Twitter support guy told me).
On the other hand, putting the key directly into a binary also doesn't make sense. I am writing my application in Python, so if I just supply the .pyc files, it takes only one more second to get the keys, thanks to the excellent reflection capabilities of Python. If I create a small .so file with the keys, it is also trivial to obtain the key by looking at the raw binary (keys have a fixed length and character set).
What is your opinion? Is it really a security hole to expose the API keys?
Security hole? In broad terms, yes. Realistically though, these aren't nuclear launch codes we're talking about.
About the worst thing that could happen is that someone could take your app's keys and use them to do something against Twitter's TOS that ends up getting the keys banned. No user data would be vulnerable, since you're not distributing the user tokens (that would be much worse from a security standpoint). Since anyone can register an app in two seconds at no cost, the only reason to do that kind of impersonation would be specifically to besmirch the reputation of you or your app.
One thing you could do is leave the keys out of the source code, make it clear that users compiling from source need to obtain their own keys and put them in the appropriate place, but leave the keys in the binary version that you distribute. Not 100% secure, but it adds that little bit of friction that will deter a certain number of ne'er-do-wells.
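A minimal sketch of that approach, assuming a consumer_keys.py module that is excluded from the repository and shipped only with binary releases (all names here are hypothetical):

    # consumer_keys.py is NOT committed to the repository; it is shipped
    # only with binary releases. Users building from source register
    # their own app with Twitter and create the file themselves with:
    #     CONSUMER_KEY = "..."
    #     CONSUMER_SECRET = "..."

    try:
        from consumer_keys import CONSUMER_KEY, CONSUMER_SECRET
    except ImportError:
        raise SystemExit(
            "consumer_keys.py not found: register your own application "
            "with Twitter and create it with CONSUMER_KEY and "
            "CONSUMER_SECRET before building from source.")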
The crawler needs to have an extensible architecture to allow changing the internal process, like implementing new steps (pre-parser, parser, etc.).
I found the Heritrix Project (http://crawler.archive.org/).
But are there other nice projects like that?
Nutch is the best you can do when it comes to a free crawler. It is built on the concepts behind Lucene (in an enterprise-scale manner) and is backed by Hadoop, using MapReduce (similar to Google) for large-scale data querying. Great products! I am currently reading all about Hadoop in the new (not yet released) Hadoop in Action from Manning. If you go this route, I suggest getting onto their technical review team to get an early copy of this title!
These are all Java-based. If you are a .NET guy (like me!!), then you might be more interested in Lucene.NET, Nutch.NET, and Hadoop.NET, which are all class-by-class and API-by-API ports to C#.
You may also want to try Scrapy (http://scrapy.org/).
It is really easy to specify and run your crawlers.
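For instance, a complete Scrapy spider fits in a dozen lines; this sketch targets Scrapy's own demo site (run it with scrapy runspider quotes_spider.py, where the file name is arbitrary):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["http://quotes.toscrape.com/"]

        def parse(self, response):
            # Yield one item per quote block found on the page.
            for quote in response.css("div.quote"):
                yield {"text": quote.css("span.text::text").get(),
                       "author": quote.css("small.author::text").get()}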
Abot is a good extensible web crawler. Every part of the architecture is pluggable, giving you complete control over its behavior. It's open source, free for commercial and personal use, and written in C#.
https://github.com/sjdirect/abot
I've recently discovered one called Nutch.
If you're not tied down to a platform, I've had very good experiences with Nutch in the past.
It's written in Java and goes hand in hand with the Lucene indexer.