I'm working on a web-based application that uses a JSON-over-HTTP API to communicate between the server and the client. The goal is that multiple clients with different purposes (an online web client, an offline desktop client, or third-party clients) can be developed against the same online data, shared through this web service.
Right now all communication between the client and server goes through POST only, via a system that works well. I've read quite a bit about REST and about being RESTful with HTTP using PUT, GET, POST, and DELETE. I could separate my API out across these different verbs, but it would mean more code and some changes to the API.
My question is: how important is it for an HTTP-based API to be RESTful? Is it a recommendation, an option, or a near necessity?
Thanks in advance.
As a die-hard RESTafarian I'd say use HTTP (the REST protocol in question) to its full extent. Why? Well, I'll show you two snippets from an email exchange I had yesterday with a good friend of mine who's seriously clever (and used to be a professor of IT, still lectures, still kicks ass wherever he goes):
Yesterday I passed an important milestone for my mappodrhom application: I can now launch long-running background computations into a worker pool. When they finish, the workers POST back their results directly into the REST resources. Which triggers more background processing, all controlled by a dependency graph.

And the interesting aspect is that this RESTful backgrounding is actually independent from my particular application. But I am currently too tired to completely grasp the consequences :-)
The consequences in question are huge (it's a REST framework with lots of little stacks and events and services and apps, all with their own discoverable URIs, all with the same unified interface), and in terms of extensibility and scalability it is simply unmatched in its simplicity. If your application is a dinky little thing that will never travel places or meet hot chicks, yeah maybe you don't need it. But then, I've said the same, and all of a sudden found myself on a train to Paris with a cute girl that is a secret spy for the Russians, and well, one thing led to another...
Here's my reply, with some of my own experiences:
I think this sounds (pardon my French) f***ing awesome! I'm experiencing similar things with my own REST stuff, where, because the middle layer is so thin and transparent, I can just extend things the way I need them without worrying too much about the infrastructure. It's such a freedom, such a kick-ass cool thing that my brain is about to explode, and a worrisome curiosity as to why more aren't doing it?
In short, doing REST only half-way is just like not really doing it at all. You're just shifting your stuff over a different pipeline, missing out on a simplified API into a state-machine, semantics- and implementation decoupling at the core, working with the principles that built the net (and hence I'd say you've got rather proven ideas behind you), the unified interface, and having URIs as part of your modeling.
I know it's popular to say that you can pick and choose, that it's all just options. It's not. REST only makes sense when you use it fully. As for convincing you to actually stretch your brain a bit further and do something clever, I can only dare you to cut through the FUD (that it's all about RPC, that only GET and POST are necessary, that you don't need it all, that it's equivalent to JSON, SOAP and other ilk, etc.) and be smarter about how you make applications. Yeah, I dare you all!
Unless you are going to take advantage of hypermedia, I wouldn't bother trying to conform to the REST constraints. Hypermedia is the piece of the puzzle that makes the system greater than the sum of its parts.
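To make the hypermedia point concrete, here is a minimal sketch (in Python, assuming Flask; the resource and link names are made up for illustration): the representation carries the URIs of the next legal actions, so a client discovers what it can do instead of hard-coding endpoints.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    # The response describes the resource *and* where the client can go next.
    return jsonify({
        "id": order_id,
        "status": "pending",
        "_links": {
            "self":   {"href": f"/orders/{order_id}"},
            "cancel": {"href": f"/orders/{order_id}/cancellation"},
            "items":  {"href": f"/orders/{order_id}/items"},
        },
    })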
You are free to pick and choose which of the REST constraints you want to respect in your architecture; just note that to be able to call the end result RESTful, the only optional constraint is "code-on-demand" (code download).
It's an option amongst several for exposing a web application as a web service with a well-defined API. Other options include:
No API - The application has no real way to be used as a component in other distributed systems
SOAP - An XML-based protocol for defining remote API calls
JSON - A compact information-exchange format that can be built on to create a custom API format (or used to build a REST system if you wanted)
Many other remote procedure call and information-exchange mechanisms.
REST has a nice ideal behind it, but that doesn't mean you have to provide a REST API for your application. If the gain isn't worth the extra effort, don't bother.
It is a recommendation. I am glad you did not ask how RESTful you need to be, as there is a distinction sometimes called hi-REST versus lo-REST; you can get more information by googling. Some industry veterans I know do not much care about this, but I find that staying as close to HTML and HTTP as possible helps you in the long run and simplifies many things.
I would submit that it's a nice-to-have but not a must. In my experience adding this architecture increases the scope and complexity of the project, but it does add a degree of elegance to the whole. I would say that if you've got the time and budget on the project, go for it; if not, don't worry about it.
One (of the many) things to consider with service APIs is the ease with which they can be consumed by your end user. REST is gaining a very strong tooling presence.
By far the largest dev group out there is the .NET dev group, and with ADO.NET Data Services (Astoria) consuming REST using LINQ is very simple and very elegant.
I am working on a project that has very complex integration needs, specifically with receiving and sending EDI data and all the "fun" stuff that happens in between. I can definitely focus efforts around data processing (validation, required fields, transformation), but the problem I am having is how to frame stories and epics in the backlog to plan and track work.
It is very easy to say "As a manager, I can deny a vacation request so that I can make sure that I have enough workers on staff to meet my commitments." Actually, I am very very good at this, but I am very new to this kind of integration effort.
For a big integration project, it is tougher to indicate who the user is and what the value is. The EDI integration requirements are just interface (non-functional) requirements, but the implementation is a big effort.
Can anyone provide some guidance on how to structure / frame these kinds of requirements in the product backlog I am creating?
Mike Cohn has something to say about this; I think this last paragraph is very relevant:
But, you should be careful not to get obsessed with that template. It’s a thinking tool only. Trying to put a constraint into this template is a good exercise as it helps make sure you understand who wants what and why. If you end up with a confusingly worded statement, drop the template. If you can’t find a way to word the constraint, just write the constraint in whatever way feels natural
Scrum does not specify that requirements should be written as user stories, and you should use whatever technique works best for you. If you are battling with "AS A ..." type stories, try "IN ORDER TO <benefit> AS A <role> I WANT <feature>". If that does not work, use use-case modeling.
Requirements are not contracts but placeholders for communication. The key here is to have just enough information for planning purposes, giving the team a sense of what has to be done. The details can be discussed during the sprint.
What I do in situations like this is:
1) Try and come up with the simplest bit of end-to-end functionality that we can implement for the integration.
2) Try and come up with a use case for that integration
3) Translate that into stories (optional step: Stories aren't a law of physics. You use 'em if they're useful.)
For example:
1) Okay - looks like authentication is the most trivial thing to implement that touches everything.
2) Hey - authentication by itself is useful. We can use it to know whether this group of users can access the data.
3) "As a site administrator I want to make sure that only authorised stuff have access to Foo to prevent valuable information being publicly accessible"
This way you'll always have a working EDI system - it just covers a subset of the functionality. A subset you can grow over time - hopefully in order of the importance of the functionality to your business.
My real preference, however, would be to dig a bit further into why the EDI is being done. Generally it won't be because "EDI" is a feature that people want. It'll be because the EDI is necessary for some other bit of functionality in the system.
In which case, rather than having a separate EDI project, I would much rather use whatever the thing-that-needs-EDI is to drive the development of the EDI layer. The stories in (3) above will then be coming from a live project - and you'll be much more likely to build what you need and avoid waste.
The standard way of working on a new API (library, class, whatever) usually looks like this:
you think about what methods the API user would need
you implement the API that you suspect the user will need
So basically you are trying to guess what your API should look like. This very often leads to over-engineering: huge APIs that you think the user will need, where it is very possible that a great part of your code won't be used at all.
Some time ago, maybe even a few years, I read an article that promoted writing the client code first. I don't remember where I found it, but the author pointed out several advantages, like a better understanding of how the API will be used, what it should provide, and what is basically superfluous. I think the idea was that it goes along with the Scrum methodology and user stories, but at the implementation level.
Just out of curiosity, for my latest private project I started not with the actual API (some kind of toolkit library) but with the client code that would use that API. Of course my code is all in red because the classes, methods, and properties do not exist, and I can forget about help from IntelliSense, but what I noticed is that after a few days of coding my application "has" all the basic functionality, and my library API "is" a lot smaller than I imagined when starting the project.
I'm not saying that if somebody took my library and started using it, it wouldn't lack some features, but I think it helped me realize that my idea of this API was somewhat flawed, because I usually try to cover all the bases and provide methods "just in case". And sometimes that bites me badly, because I make some stupid mistake in the basic functions while being more focused on code that somebody might conceivably need.
So what I would like to ask is: have you ever tried this approach when you needed to create a new API, and did it help you? Is it a recognized technique that has a name?
So basically you're trying to guess what your API should look like.
And that's the biggest problem with designing anything this way: there should be no (well, minimal) guesswork in software design. Designing an API based on assumptions rather than actual information is dangerous, for several reasons:
It's directly counter to the principle of YAGNI: in order to get anything done, you have to assume what the user is going to need, with no information to back up those assumptions.
When you're done, and you finally get around to using your API, you'll invariably find that it sucks to use (poor user experience), because you weren't thinking about how the library is used (UX), you were thinking about what the library must do (features).
An API, by definition, is an interface for users (i.e., developers). Designing as anything else just makes for a bad design, without fail.
Writing sample code is like designing a GUI before writing the backend: a Good Thing. It forces you to think about user experience and practical effects of design decisions without getting bogged down in useless theorising and assumption.
And contrary to Gabriel's answer, this is not bottom-up design: it's top-down. Rather than design the concrete backend of your library and then force an abstract interface on top of it, you first design the interface and then worry about the implementation.
Generally speaking, the idea of designing the concrete first and abstracting from it afterwards is called bottom-up design. Test-Driven Development uses a principle similar to what you describe to support better design. First you write a test, which is a use of the code you are going to write afterwards. It is important to proceed stepwise, because you have to prove the API is implementable. An important part of each step is refactoring - this allows you to design a more concise API and reuse parts of your code.
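To make the test-first flavour of this concrete, here is a minimal sketch in Python (pytest-style; the Toolkit class and its resize method are invented for illustration): the test is written first, as the API's first client, and only afterwards is the simplest implementation added that makes it pass.

# The test comes first and pins down only what the client actually needs.
def test_resize_keeps_aspect_ratio():
    toolkit = Toolkit()
    assert toolkit.resize(width=800, height=600, target_width=200) == (200, 150)

# Written afterwards: the simplest thing that satisfies the test. Refactoring
# then tightens the API without growing it "just in case".
class Toolkit:
    def resize(self, width, height, target_width):
        return (target_width, round(height * target_width / width))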
I maintain an Open Source Android app. Every once in a while, some anonymous hero localizes it into their mother tongue, sending files or using our online tool.
At first I thought the magic of collaboration would be enough to provide timely localizations, but actually, the UI strings change, and each release ships with roughly:
5 languages localized at 100%
8 languages localized at 70% (because recent strings have not been localized)
I am extremely thankful, but is there something I can do to bring the localizations from 70% to 100% when the release comes?
I send messages on the mailing list at code freeze a month before each release, and then a week before, but most of the people who contributed localizations don't read the mailing list, in fact most of them were probably just good samaritans passing by.
Should I stalk the translators and ask them personally?
I have been thinking about having a person responsible for each language. This person (what should I call the role?) would be "responsible" for bringing the translation to 100% before each release. Their names would be listed in the "About" dialog. Is that a good or a bad strategy? Any tips?
It's the 90-9-1 principle - http://www.90-9-1.com/ - and designating people in charge of things isn't going to do it. You can offer cash - rewards/pay - or you can groom them. If you bring money into the mix, remember that it quickly becomes tradeoff analysis. People will compare what you offer versus what they can earn on their own. Since I assume you don't have that much money, you don't want that sort of comparison.
Realistically, grooming them is a better option. You've done the first step - include them - show that their fixes, updates, etc. are included and help the product. The next step is publicly thanking them and appreciating them. That will get the first 80% as you've seen. The next step is getting them personally committed. Start interacting with them directly. Send them your thanks... not just in email. If you have a product t-shirt, send them one with a hand-written note. In your release notes, link to their sites. If you ever see them in person, buy their coffee... whatever. The point is that you go out of your way - however small - to acknowledge that they're going out of their own way.
I'm the Project Lead of the Open Source Project Management System web2project and have been doing this longer than I care to consider. ;)
Use a carrot-and-stick approach. Giving people credit in your "About" screen, and calling them the lead translator for a particular language (if they're willing to accept that responsibility), is a good thing. Don't include a translation until you've gotten someone to commit to being the lead translator for that language; just like with code, you don't want to accept contributions unless (a) you are able and willing to maintain those contributions or (b) someone you consider reliable has volunteered to be responsible for maintaining them.
The "stick" is that if someone stops maintaining a language, and fails to appropriately pass the responsibility off to someone else, you will remove that language from the next version of your application. Most people who translate your app probably do so because they would prefer to use your app in their native language. The threat of removing the language from the next version, or the wake-up call when they find that the next version doesn't contain their language, might inspire them to come back and finish up the last 30% translation work.
You are getting it wrong: community translation is an ongoing process, or let's say a never-ending one.
This doesn't mean that it is wrong, but you have to live with the fact that localization is always a compromise, and that usually it is better to have 20 languages 50% translated than 10 languages 100% translated.
If one language is important, it will have more users, so it will have more contributors and the translation rate will be greater.
You don't know when, or if, the translation for a specific language will reach 100% - probably never.
The good part is that you shouldn't care about reaching 100%; you should spend your effort motivating people to contribute.
As you probably already know, community translation doesn't play well with packaged products.
The solution to this problem is to change the way you work: provide partial translations in your release, plus a simple update system that refreshes them (preferably a silent one, like the Chromium updater).
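To make that concrete, here is a rough sketch (in Python; the URL, file layout and locale list are all made-up assumptions) of a minimal "pull newer translations at startup" updater that silently refreshes the bundled catalogs:

import json
import urllib.request
from pathlib import Path

TRANSLATIONS_URL = "https://example.org/myapp/i18n/{locale}.json"  # hypothetical endpoint
LOCAL_DIR = Path("i18n")

def refresh_translations(locales):
    # Fetch the latest catalog for each locale; on any failure keep the
    # bundled (possibly partial) translation that shipped with the release.
    LOCAL_DIR.mkdir(exist_ok=True)
    for locale in locales:
        try:
            url = TRANSLATIONS_URL.format(locale=locale)
            with urllib.request.urlopen(url, timeout=5) as resp:
                data = json.load(resp)
        except (OSError, ValueError):
            continue
        (LOCAL_DIR / f"{locale}.json").write_text(
            json.dumps(data, ensure_ascii=False), encoding="utf-8")

refresh_translations(["de", "fr", "pt_BR"])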
PS. If you need to ensure a 100% translation rate for a limited number of languages before your release, consider paying a translation vendor to do it.
Well, you get what you pay for. The strategy you are planning to implement will not give you much, because heroes have their own lives. The only way to get this done is to engage more people to translate, because completion percentage is a function of community size.
If your application is useful enough, you could try adding a "help translate into your language" link, and this might do for a while.
Think about creating a discussion group or board where you could post a "call for translation" when you are ready to ship. Translators usually don't have time to track changes.
One thing to note is that localization people need time to do the localizations, and the less they're getting paid for it, the longer they need. This means that you should at least try to avoid changing the localization keys (especially including defining new ones) shortly before a release. I know it's nice to not be constrained like that, but the reality is that if you're not paying then you're not going to get a fast turnaround in the majority of cases, so you have to plan for this and mitigate in your schedule. In effect, you have to stop thinking about the text shown to a user as something that can be finalized late in the project and instead start treating it as an important interface matter that you'll plan to lock down much earlier.
Most people would agree that internationalizing an existing app is more expensive than developing an internationalized app from scratch.
Is that really true? Or is it that when you write an internationalized app from scratch, the cost of doing I18N is just amortized over multiple small assignments, so nobody feels the whole weight of the internationalization task on their shoulders?
You could even claim that a mature app has many, many lines of code that were deleted over the project's history, which would not need to be I18Ned if internationalization is done as an afterthought, but which would have been I18Ned if the project had been internationalized from the very beginning.
So do you think a project starting today must be internationalized, or can that decision be deferred to the future, based on the success (or not) the software enjoys and the geographic distribution of the demand?
I am not talking about the ability to manipulate Unicode data; you get that for free in most mainstream languages, databases, and libraries. I am talking specifically about supporting your own software's user interface in multiple languages and locales.
"when you write an internationalized app from scratch the cost of doing I18N is ... amortized"
However, that's not the whole story.
Retroactively tracking down every message to the users is -- in some cases -- impossible.
Not hard. Impossible.
Consider this.
theMessage = "Some initial part" + some_function() + "some following part";
You're going to have a terrible time finding all of these kinds of situations. After all, some_function just returns a String. You don't know if it's a database key (never shown to a person) or a message which must be translated. And when it's translated, grammar rules may reveal that a 3-part string concatenation was a dumb idea.
You can't simply grep for every String-valued function and treat its result as a possible I18N message that must be translated. You have to actually read the code, and possibly rewrite the function.
Clearly, when some_function has any complexity to it at all, you're stumped as to why one part of your application is still in Swedish while the rest was successfully I18N'd into other languages. (Not to pick on Swedes in particular, replace this with any language used for development different from final deployment.)
Worse, of course, if you're working in C or C++, you might have some of this split between pre-processor macros and proper C-language syntax.
And in a dynamic language -- where code can be built on the fly -- you'll be paralyzed by a design in which you can't positively identify all the code. While dynamically generating code is a bad idea, it also makes your retroactive I18N job impossible.
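For contrast, here is a minimal sketch of what each such message has to become (using Python's standard gettext module; the domain, locale directory and Swedish catalog are illustrative): one externalized, parameterized string that a translator can reorder and rephrase as a whole.

import gettext

# fallback=True keeps the English source string when no catalog is installed.
translation = gettext.translation("myapp", localedir="locale",
                                  languages=["sv"], fallback=True)
_ = translation.gettext

def status_message(detail):
    # The whole sentence is one translatable unit with a named placeholder,
    # so grammar and word order are the translator's problem, not the code's -
    # something a three-part concatenation makes impossible.
    return _("Some initial part %(detail)s some following part") % {"detail": detail}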
I'm going to have to disagree that it costs more to add it to an existing application than to build it in from scratch in a new one.
A lot of the time i18n is not required until the application gets 'big'. When you do get big, you will likely have a bigger development team to devote to i18n so it will be less of a burden.
You may not actually need it. A lot of small teams put great effort into supporting internationalization when they have no customers who require it.
Once you have internationalized, it makes incremental changes more time-consuming. It doesn't take a lot of extra time, but every time you need to add a string to the product, you need to add it to the bundle first and then add a reference. No, it is not a lot of work, but it is effort and does take a bit of time.
I prefer to 'cross that bridge when we come to it' and internationalize only when you have a paying customer looking for it.
Yes, internationalizing an existing app is definitely more expensive than developing the app as internationalized from day one. And it's almost never trivial.
For instance
Message = "Do you want to load the " & fileType() & " file?"
cannot be internationalised without some code alterations, because many languages have grammatical rules like gender agreement. You often need a different message string for loading every possible file type, unlike in English, where it's possible to bolt together substrings.
There are many other issues like this: you need more UI space because some languages need more characters than English to express the same concept, you need bigger fonts for East Asian scripts, you need to use localised dates/times in the user interface but perhaps US English formats when communicating with databases, in some locales you need to use a semicolon as the delimiter for CSV files, string comparisons and sorting are culture-dependent, phone numbers & addresses differ...
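As a small illustration of how much of this is formatting rather than translation, here is a sketch using the Babel library (a third-party Python i18n package; the locales and values are arbitrary examples):

from datetime import date
from babel.dates import format_date
from babel.numbers import format_decimal

d = date(2010, 3, 14)
print(format_date(d, format="long", locale="en_US"))   # March 14, 2010
print(format_date(d, format="long", locale="de_DE"))   # 14. März 2010
print(format_decimal(1234.5, locale="en_US"))          # 1,234.5
print(format_decimal(1234.5, locale="de_DE"))          # 1.234,5 - same value, different separators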
So do you think a project starting today must be internationalized, or can that decision be deferred to the future based on the success (or not) the software enjoys and the geographic distribution of the demand?
It depends. How likely is the specific project to be internationalised? How important is it to get a first version out fast?
If you truly think you get "unicode handling" "for free", you may have a surprise coming your way when you try.
Unless you use a framework that has proven i18n ability beyond languages with the ANSI or very similar character sets, you will find several niggles and more major issues where the unicode handling isn't quite right, or simply unavailable. Even with relatively common languages (e.g. German) you can run into difficulty with shrinking or expanding letter counts and APIs that don't support unicode.
And then think of languages with a different reading order!
This is one of the reasons you should really plan it in from the beginning, and test the stuff to destruction on the set of languages you plan to support.
The concept of i18n and l10n is broader than merely translating strings to and from various languages.
Example: consider the input of date and time by users. If you don't have internationalization in mind when you design
a) the interface for the user and
b) the storage, retrieval and display mechanism
you will have a really bad time when you want to enable other input schemes.
Agreed, in most cases i18n is not necessary at first. But, and that is my point, if you don't give some thought to the areas that must be touched for i18n, you will find yourself rewriting large portions of the original code. And then adding i18n is a lot more expensive than having spent some thought beforehand.
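A minimal sketch of the storage-versus-display separation for dates and times (Python 3.9+ with zoneinfo; the time zone and display format are illustrative):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

stored = datetime.now(timezone.utc)                      # persist everything in UTC
local = stored.astimezone(ZoneInfo("Europe/Berlin"))     # convert only at the display edge
print(local.strftime("%d.%m.%Y %H:%M"))                  # day.month.year, as a German user expects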
One thing that can be a big issue is the different character counts for the same message in various languages. I do some work on iPhone apps, and especially on a small screen, if you design the UI around a message that takes 10 characters, then try to internationalize later and find you need 20 characters to display the same thing, you have to redo your UI to accommodate it. Even with desktop apps this can still be a large PITA.
It depends on your project and how your team is organised.
I've been involved in the internationalization of a website, and it was one developer full-time for a year, probably about 6-8 months part-time for me to handle installation impacts when needed (reorganising files, etc), and other developers getting involved from time to time when their projects needed heavy refactoring. This was in an application that was at v3.
So that's definitely expensive. What you have to ask is how expensive is it to provide a localization system from the start, and how will that impact the project in the early stages. Your project at v1 may not be able to survive delays and setbacks caused by issues with a hastily-designed internationalization framework, while a stable v3 project with a wide customer base may have the capital to invest in doing that properly.
It also depends on whether you want to internationalize everything including log messages, or just the UI strings, and how many of those UI strings there are, and who you have available to do localization and the QA that goes with it, and even what languages you want to support - for example, does your system need to support unicode strings (which is a requirement for Asian languages).
And don't forget that changing the database backend to support internationalized data can be costly as well. Just try to change that varchar field to nvarchar when you already have 20,000,000 records.
I think it depends on the language. Every J2EE (Java web) app is internationalized, because it's very easy (even the IDE can extract strings for you, and you just name them).
In J2EE it's cheaper to add it later; however, the culture is to add it as soon as possible. I think that's because J2EE uses a lot of open source, and almost all open-source libs are internationalized. It's a great idea for them, but not for most J2EE apps: most enterprise apps are just for one company that speaks one language.
Plus, if you have bad testers, adding it too soon makes them give you bug reports about labels and translations (only once have I seen translations done NOT by developers). After the testers are done with it, you have a buggy app with excellent i18n support. It might be fun for users to switch languages and see whether they can still use it, but using your app is just boring work for them, so they won't even do that. The only users of the i18n are the testers.
Weird string joining is not part of J2EE culture, since you know that one day someone might want to internationalize the app. The only problem is extracting labels from HTML templates.
I can't say what is expensive, but I can tell you that a clean API lets you internationalise your application at very low cost.
In the course of your software development lifecycle, what essential design artifacts do you produce? What makes them essential to your practice?
The project I'm currently on has been in production for 8+ years. This web application has been actively enhanced and maintained over that time. While we have CMMI based policies and processes in place, with portions of our practice being well defined, the design phase has been largely overlooked. Best practices, anyone?
Having worked on a lot of waterfall projects in the past and a lot of ad-hoc and agile projects more recently, there are a number of design artifacts I like to create, although I can't stress enough that it really depends on the details of the project (methodology, team structure, timescale, tools, etc.).
For a generic, server-based 'enterprise application' I'd want the bare minimum to be something along these lines:
A detailed functional design document (aka spec). Generally something along the lines of Joel Spolsky's WhatTimeIsIt example spec, although probably with some UML use-case diagrams.
A software technical design document. Not necessarily detailed to 100% system coverage, but detailed in all the key areas and containing all the design decisions. Being a bit of a UML freak, it'd be nice to see lots of pictures along the lines of package diagrams, component diagrams, class diagrams for the key features, and probably some sequence diagrams thrown in for good measure.
An infrastructure design document. Probably with a UML deployment diagram for the conceptual design and perhaps a network diagram for something more physical.
When I say "document", any of the above might be broken down into multiple documents, or perhaps stored in a wiki or some other tool.
As for their usefulness, my philosophy has always been that a development team should be able to hand over an application to a support team without having to hand over their phone numbers. If the design artifacts don't clearly indicate what the application does, how it does it, and where it does it, then you know the support team is going to give the app the same care and attention they would a rabid dog.
I should mention I'm not advocating the practice of handing software over from a dev team to a support team once it's finished, which raises all manner of interesting issues; I'm just saying it should be possible if management so desired.
Working code...and whiteboard drawings.
:P
Designs change so much during development and afterwards that most of my carefully crafted documents rot away in source control and become almost more of a hindrance than a help, once code is in production. I see design documents as necessary to good communication and to clarify your thinking while you develop something, but after that it takes a herculean effort to keep them properly maintained.
I do take pictures of whiteboards and save the JPEGs to source control. Those are some of my best design docs!
In our model (which is fairly specific to business process applications) the design artefacts include:
a domain data model, with comments on each entity and attribute
a properties file listing all the modify and create triggers on each entity, calculated attributes, validators and other business logic
a set of screen definitions (view model)
However, do these really count as design artefacts? Our framework is such that these definitions are used to generate the actual code of the system, so maybe they go beyond design.
But the fact that they serve double duty is powerful because they are, by definition, up to date and synchronised with the code at all times.
This is not a design document, per se, but our unit tests serve the dual purpose of "describing" how the code they test is supposed to function. The nice part about this is that they never get out of date, since our unit tests must pass for our build to succeed.
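For example, a minimal sketch (pytest-style; apply_discount is a made-up function, defined inline so the example runs) of tests whose names and assertions read like a behaviour description:

import pytest

def apply_discount(total):
    # Hypothetical function under test.
    return total * 0.9 if total > 100 else total

def test_orders_over_100_get_a_ten_percent_discount():
    assert apply_discount(150.00) == pytest.approx(135.00)

def test_orders_at_or_below_100_are_not_discounted():
    assert apply_discount(100.00) == 100.00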
I don't think anything can take the place of a good old fashioned design spec for the following reasons:
It serves as a means of communicating to others how you will build the application.
It lets you get ideas out of your head so you don't worry about tracking a million things at the same time.
If you have to pause a project and return to it later you're not starting your thought process over again.
I like to see various bits of info in a design spec:
General explanation of your approach to the challenge at hand
How will you monitor your application?
What are the security concerns and how are they addressed?
Flowcharts / sequence diagrams
Open issues
Known limitations
Unit tests, while a fantastic and arguably critical item to include in your application development, don't cover all of these topics.