At our company we are currently trying to establish the concept of a Story Map (Jeff Patton).
I have the following questions:
- What happens in a story map with cross-cutting issues/tasks? We have activities that run throughout the project. Where are these activities kept in a story map?
- A story map has areas of responsibility. What happens when an activity is moved from one area of the map to another while the other activities keep their place in the time sequence? Does the task area then appear twice in the story map, since the temporal sequence should be preserved?
- Does the time sequence apply to each task area individually, or to the story map as a whole?
- Are headings for task areas such as installation, termination, cancellation and change okay? These are the business processes that we implement.
Thanks in advance for answering the questions.
Kai.
Kai, here are two perspectives on story mapping: John Walpole of Twitter shares their experience with story maps and I also wrote a primer on story mapping.
I want a list of locations (coordinates) for all possible colonies/neighborhoods of some Indian cities. Take for example Delhi. Can this data be obtained with the Places API?
The only thing that comes to my mind is to use a query like -
https://maps.googleapis.com/maps/api/place/search/xml?location=28.540346,77.210026&radius=500&types=administrative_area_level_1|administrative_area_level_2|administrative_area_level_3|locality|neighborhood|street_address|sublocality|sublocality_level_4|sublocality_level_5|sublocality_level_3|sublocality_level_2|sublocality_level_1|subpremise&sensor=false&key=MYKEY
and then keep changing the radius in steps of 500 until the whole city is covered (one way to realise this is sketched below).
Is there a better way of doing this?
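For illustration (not from the original question), here is roughly how that sweep could look in Python: tile the city's bounding box with overlapping 500 m searches rather than growing a single radius. The bounding-box values, the step size and the trimmed types list are all placeholder assumptions.

```python
import math
import requests

API_URL = "https://maps.googleapis.com/maps/api/place/search/xml"
TYPES = "sublocality|neighborhood|locality"  # trimmed list for brevity
RADIUS_M = 500

def grid_points(south, west, north, east, step_m=700):
    """Yield lat/lng centres spaced so the 500 m circles overlap slightly."""
    lat = south
    while lat <= north:
        # metres per degree of longitude shrinks with latitude
        lng_step = step_m / (111320 * math.cos(math.radians(lat)))
        lng = west
        while lng <= east:
            yield lat, lng
            lng += lng_step
        lat += step_m / 110540  # metres per degree of latitude

for lat, lng in grid_points(28.40, 76.84, 28.88, 77.35):  # rough Delhi box
    resp = requests.get(API_URL, params={
        "location": "%f,%f" % (lat, lng),
        "radius": RADIUS_M,
        "types": TYPES,
        "sensor": "false",
        "key": "MYKEY",
    })
    # parse resp.text (XML) here and collect unique place ids
```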
Given how often you would need to do this for your map, and since caching that data goes against the terms of service, this is not a great approach. If your map gets any decent usage, you'll rapidly hit your quota. Plus, you'd only get the center points of the colonies/neighborhoods. I'd recommend trying to find another source for that data that you can download. The Places API was not designed with this in mind.
I have a SQL Server 2008 database containing names, addresses and postcodes plus lat/long. I have a Google map with an outline "boundary" polygon drawn on it, and the data scattered across it.
What I need to do next is to switch that to a "heat map"/choropleth. Googling around, the implementations I can see don't look too pretty and don't "know about" UK postcode boundaries.
So I'm thinking that I will need to draw my own postcode area polygons and then colour them with my own heat map colours. I can see how to do that "the hard way". I already have the ability to GROUP BY postcode, so the frequencies are easy; drawing the boundaries is a bit of graft with the mouse. Alternatively, I could opt for a square grid, which would require more server compute but less polygon drawing (a sketch of that option follows).
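A minimal sketch of that square-grid option (a toy illustration; the cell size and colour ramp are arbitrary choices): bin each lat/long point into a fixed-size cell, count the rows per cell, and turn the counts into fill colours.

```python
import math
from collections import Counter

CELL_DEG = 0.005  # roughly 500 m of latitude; tune per city

def cell_for(lat, lng):
    """Index of the fixed-size grid cell containing the point."""
    return (math.floor(lat / CELL_DEG), math.floor(lng / CELL_DEG))

def heat_colours(points):
    """Map each occupied cell to a green-to-red colour by frequency."""
    counts = Counter(cell_for(lat, lng) for lat, lng in points)
    top = max(counts.values())
    return {
        cell: "#%02x%02x00" % (int(255 * n / top), int(255 * (1 - n / top)))
        for cell, n in counts.items()
    }

# Each cell then becomes a small square polygon on the map: its corners run
# from (i*CELL_DEG, j*CELL_DEG) to ((i+1)*CELL_DEG, (j+1)*CELL_DEG),
# filled with the colour computed above.
```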
The free Code-Point® Open database has what I think are the centroids of the postcode areas; irrelevant, as I need the borders (I already know which person is in which postcode).
Finally to the questions:
- Does anyone know of a data source for UK postcode boundaries? Fortunately I only need to code a single city at a time, so I can draw them by hand if necessary, but obviously if they're already coded somewhere that would save me graft. I guess I'm looking preferably for data, but if I can't find that I'll need a map I can hand-copy.
- Are there any examples of this out there? I'm happy to re-invent the wheel, but not if I can steal it first.
If you are a member of a UK academic institution you can get postcode boundaries from the UKBorders project. The boundaries may be available to non-academics, so have a browse on the Edina [http://edina.ac.uk/ukborders/] site and see...
What level of postcodes are you after? Area, sector, district, or full unit postcodes? http://en.wikipedia.org/wiki/UK_postcode
A friend of mine is in the real estate business and, after being shown the art of writing copy for real estate ads, I realized that it is very formulaic, especially when advertising online, as there are predefined fields you fill in.
Naturally, I thought about creating a generator that pretty much automates writing the ads. I don't expect it to generate outstanding or even very good copy, just that it can put together words and sentences like a human would.
I have a skeleton/template that defines an ad, and I've also put together a set of phrases and words that can be randomly selected, but I am interested in the more general aspects of coding such a generator. Any suggestions, tips or literature that I can read to better understand this little project?
Using metadata about the listing would be one way.
Say for a given house, you have these attributes:
(type: bungalow, sq feet: <= 1400), you could use the phrase "cozy cottage".
bedrooms: obvious; same thing with bathrooms. Assume using words like large, medium, etc.
garage spots: if > 2 then "Can park many vehicles", etc.
You could go even further with this: given the lat/lon for the address, there are web services through which you can find the number of parks nearby, crime in the neighborhood, etc. A toy sketch of this rule-based approach follows below.
Rick
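A toy sketch of that rule-based approach (added for illustration, not part of the original answer; the attribute names, thresholds and phrases are all invented):

```python
import random

def describe(listing):
    """Turn listing metadata into boilerplate ad copy via simple rules."""
    parts = []
    if listing["type"] == "bungalow" and listing["sq_feet"] <= 1400:
        parts.append(random.choice(["Cozy cottage", "Charming bungalow"]))
    size = "large" if listing["bedrooms"] >= 4 else "comfortable"
    parts.append(f"{size} {listing['bedrooms']}-bedroom home "
                 f"with {listing['bathrooms']} bathrooms")
    if listing.get("garage_spots", 0) > 2:
        parts.append("room to park many vehicles")
    return ", ".join(parts) + "."

print(describe({"type": "bungalow", "sq_feet": 1200,
                "bedrooms": 3, "bathrooms": 2, "garage_spots": 3}))
```

Each rule is independent, so new phrases can be added or thresholds tuned without touching the generator's structure.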
I'd say there are three basic approaches you could take to a problem like this, depending on how flexible you want the system to be and on how much work you want to put into it. The simplest is to treat it as a report generation problem, along the lines of Rick's suggestion. That's probably the way I'd go to produce a first draft of a listing. The results would be pure boilerplate, but each listing could be quickly punched up by the copywriter.
If you wanted to get fancy, though, you could come at it as a natural language generation problem. You'd start with some kind of a knowledge representation describing the meaning of the listing and set of rules (finite state transducers, say) for mapping meanings to linguistic forms. There's a sizable academic literature on that kind of stuff, though it's kind of out of fashion these days. Places to start might be Blackburn & Bos's book or the NLTK suite (especially some of the projects in the contrib package).
The third way of doing it would be to treat it as a translation problem, essentially "translating" database entries into ad copy. You'd start with a large collection of listings and the corresponding human-written ads and construct a statistical model of the relationship between the two. Moses/Giza++ is a general purpose tool for building and applying such models.
I'm interested in building a 3D model of our solar system for web use (probably with AS3 and Papervision) and have been looking into how I would go about encoding the planetary positions. My idea was to download the already-calculated positions from NASA, as calculating the positions myself seems a bit overcomplicated. I'm not sure, though, whether I should use a heliocentric or an earth-centric (geocentric) encoding.
I wanted to know if there is anyone with experience in this. Which approach would be better? The NASA JPL website seems to have the positions of all the major bodies in our solar system as geocentric. I can see this becoming a problem later on, though, when adding Voyager and Mars lander missions to the model.
Any feedback, comments and links are very welcome.
EDIT: I have a rough model running that uses heliocentric coordinates, but I haven't been able to find the coordinates for all planets in this format.
UPDATE:
I don't have a lot of detail to provide for now, because I really don't know what I'm doing (from the space point of view). I wanted to get a handle on 3D programming, and I'm interested in space. The idea was that I would make a rough solar system simulator with all the planets and their orbiters (maybe excluding satellites at first). Perhaps include a news aggregator and some links to news/resources and so on. The general idea would be to allow people to click around and get super excited about going to the moon and Mars (for starters).
In the long run I hopefully would be able to add in satellites and the moon missions (scroll back in time to the 70's and see the moon missions).
So to answer Arrieta's question the idea was not to calculate eclipses but to build an easy to approach, interactive space exploratorium, and learn some 3D and space related stuff on the way.
Glad you want to build your own simulator, but depending on what you want to do it may be far from an easy task. The simplest approach is as follows:
Download the JPL-DE405 ephemerides and the subroutines for retrieving the planetary positions (wrt Solar System Barycenter).
Request a timespan, compute the positions, and display them on screen in a visually appealing manner
Done
Now, why would you want to do this? If you want to view the planets' orbits, that's it. You are done. If you want to compute geometric events (like eclipses, line-of-sight, or illumination) then you are in a whole different ball game. That's astronautics, and it is not simple.
Please be more specific. The distinction you make between "geocentric" and "heliocentric" coordinates really has no major difficulty involved. If you have all the states in the heliocentric frame, you can compute the geocentric frame by simple vector subtraction. That's not the problem! The problems are a thousand others, but you need to be specific so we can provide more guidance.
JPL has provided high quality ephemerides for decades now, and we have a full team of brilliant people working on it. It is one of the most difficult things to get right!
Again, provide more details or check out other sources of information.
Please google "Solar System Simulator" (done here, at JPL) and see if it fulfills your needs.
Cheers.
It may be worth you checking out the ASCOM Platform (we also have a stack exchange site called ASCOM Answers).
The ASCOM Platform has several useful libraries for doing this sort of thing.
- USNO NOVAS (Naval Observatory Vector Astrometry)
- Kepler orbit engine
The USNO/NOVAS stuff was originally written in C and we've wrapped it up in .NET for ease of use from C# and VB.
As an added bonus (actually it's the raison d'être for ASCOM), the Platform makes it easy for you to control things like telescopes; it's used by Microsoft's World Wide Telescope for exactly that purpose. It might be a fun extension to your model to be able to point a telescope at things.
I'd probably start (well, I did a while back) with heliocentric coordinates and get a few of the planets up and running. But sooner or later you'll want to write a heliocentric-to-geocentric coordinate conversion routine, and its inverse. For some bodies, such as artificial satellites, the geocentric coordinates will be easier to deal with.
You can use the astro-phys API to get a JSON-formatted state vector for all the planets. It calculates them using JPL's DE406, so it's pretty accurate, and it uses the solar system barycenter.
Alternatively, if you know where the sun is relative to the earth and you're in a geocentric model, you can subtract the position of the sun from all of the bodies (including earth) to make them heliocentric.
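To make that frame change concrete, here is a minimal sketch (a toy illustration, not from the original answers; the positions are made-up numbers rather than real ephemeris data):

```python
def recenter(states, new_origin):
    """states: {name: (x, y, z)} in a common frame, e.g. heliocentric.
    Returns the same states re-centred on new_origin by vector subtraction."""
    ox, oy, oz = states[new_origin]
    return {name: (x - ox, y - oy, z - oz)
            for name, (x, y, z) in states.items()}

heliocentric = {
    "sun":   (0.0, 0.0, 0.0),
    "earth": (1.0, 0.0, 0.0),   # AU, toy values
    "mars":  (0.0, 1.52, 0.0),
}
geocentric = recenter(heliocentric, "earth")
print(geocentric["mars"])  # Mars as seen from Earth: (-1.0, 1.52, 0.0)
```

The same one-liner works in either direction: re-centre heliocentric states on the earth to get geocentric ones, or re-centre geocentric states on the sun to go back.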
I've always been intrigued by Map Routing, but I've never found any good introductory (or even advanced!) level tutorials on it. Does anybody have any pointers, hints, etc?
Update: I'm primarily looking for pointers as to how a map system is implemented (data structures, algorithms, etc).
Take a look at the OpenStreetMap project to see how this sort of thing is being tackled in a truly free software project using only user-supplied and licensed data; they have a wiki containing stuff you might find interesting.
A few years back the guys involved were pretty easygoing and answered lots of questions I had, so I see no reason why they wouldn't still be a nice bunch.
A* is actually far closer to production mapping algorithms. It requires quite a bit less exploration than Dijkstra's original algorithm.
By Map Routing, you mean finding the shortest path along a street network?
Dijkstra's shortest-path algorithm is the best known. Wikipedia has a decent intro: http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
There's a Java applet here where you can see it in action: http://www.dgp.toronto.edu/people/JamesStewart/270/9798s/Laffra/DijkstraApplet.html and Google will lead you to source code in just about any language.
Any real implementation for generating driving routes will include quite a bit of data on the street network that describes the costs associated with traversing links and nodes: road network hierarchy, average speed, intersection priority, traffic signal linking, banned turns, etc.
Barry Brumitt, one of the engineers of Google maps route finding feature, wrote a post on the topic that may be of interest:
The road to better path-finding
Instead of learning the APIs of each map service provider (like the Gmaps and Ymaps APIs), it's good to learn Mapstraction:
"Mapstraction is a library that provides a common API for various javascript mapping APIs"
I would suggest you go to the URL and learn this general API. There is a good number of how-tos there too.
I've yet to find a good tutorial on routing, but there is lots of code to read:
There are GPL routing applications that use OpenStreetMap data, e.g. Gosmore, which works on Windows (+ mobile) and Linux. There are a number of interesting applications using the same data, but Gosmore has some cool uses, e.g. interfacing with websites.
The biggest problem with routing is bad data, and you never get good enough data. So if you want to try it keep your test very local so you can control the data better.
From a conceptual point of view, imagine dropping a stone into a pond and watching the ripples. The routes would represent the pond and the stone your starting position.
Of course the algorithm would have to search some proportion of the n^2 paths as the distance n increases. You would take your starting position and check all available paths from that point, then recursively call for the points at the end of those paths, and so on.
You can increase performance by not doubling back on a path, by not re-checking the routes at a point if it has already been covered, and by giving up on paths that are taking too long.
An alternative way is to use the ant pheromone approach, where ants crawl randomly from a start point and leave a scent trail, which builds up the more ants cross over a given path. If you send (enough) ants from both the start point and the end points then eventually the path with the strongest scent will be the shortest. This is because the shortest path will have been visited more times in a given time period, given that the ants walk at a uniform pace.
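Not from the original answer, but here is a toy sketch of that pheromone idea to make it concrete: ants walk randomly, ants that reach the goal deposit scent in proportion to how short their path was, and later ants prefer well-scented edges. The graph and all parameters are invented for illustration, and real ant-colony implementations are more involved (e.g. sending ants from both ends, as described above).

```python
import random

graph = {"S": ["A", "B"], "A": ["S", "B", "G"], "B": ["S", "A", "C"],
         "C": ["B", "G"], "G": ["A", "C"]}
pheromone = {}  # (node, node) -> scent level

def walk(start, goal, max_steps=20):
    """One ant: random walk biased towards well-scented edges."""
    path = [start]
    while path[-1] != goal and len(path) < max_steps:
        here = path[-1]
        options = graph[here]
        weights = [pheromone.get((here, n), 0.0) + 0.1 for n in options]
        path.append(random.choices(options, weights)[0])
    return path if path[-1] == goal else None

best = None
for _ in range(500):                      # release 500 ants
    path = walk("S", "G")
    if path:
        for a, b in zip(path, path[1:]):  # shorter path => stronger deposit
            pheromone[(a, b)] = pheromone.get((a, b), 0.0) + 1.0 / len(path)
        if best is None or len(path) < len(best):
            best = path
    for k in pheromone:                   # evaporation
        pheromone[k] *= 0.95

print(best)  # tends to converge on ['S', 'A', 'G'], the shortest route
```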
EDIT for Spikie
As a further explanation of how to implement the pond algorithm - potential data structures needed are highlighted:
You'll need to store the map as a network. This is simply a set of nodes and edges between them. A set of nodes constitutes a route. An edge joins two nodes (possibly the same node) and has an associated cost, such as the distance or time to traverse it. An edge can either be bi-directional or uni-directional. It's probably simplest to just have uni-directional ones and double up for two-way travel between nodes (i.e. one edge from A to B and a different one from B to A).
By way of example imagine three railway stations arranged in an equilateral triangle pointing upwards. There are also a further three stations each halfway between them. Edges join all adjacent stations together, the final diagram will have an inverted triangle sitting inside the larger triangle.
Label nodes starting from bottom left, going left to right and up, as A,B,C,D,E,F (F at the top).
Assume the edges can be traversed in either direction. Each edge has a cost of 1 km.
Ok, so we wish to route from the bottom left A to the top station F. There are many possible routes, including those that double back on themselves, e.g. ABCEBDEF.
We have a routine say, NextNode, that accepts a node and a cost and calls itself for each node it can travel to.
Clearly if we let this routine run it will eventually discover all routes, including ones that are potentially infinite in length (eg ABABABAB etc). We stop this from happening by checking against the cost. Whenever we visit a node that hasn't been visited before, we put both the cost and the node we came from against that node. If a node has been visited before we check against the existing cost and if we're cheaper then we update the node and carry on (recursing). If we're more expensive, then we skip the node. If all nodes are skipped then we exit the routine.
If we hit our target node then we exit the routine too.
This way all viable routes are checked, but crucially only those with the lowest cost. By the end of the process each node will have the lowest cost for getting to that node, including our target node.
To get the route we work backwards from our target node. Since we stored the node we came from along with the cost, we just hop backwards building up the route. For our example we would end up with something like:
Node A - (Total) Cost 0 - From Node None
Node B - Cost 1 - From Node A
Node C - Cost 2 - From Node B
Node D - Cost 1 - From Node A
Node E - Cost 2 - From Node D / Cost 2 - From Node B (this is an exception as there is equal cost)
Node F - Cost 2 - From Node D
So the shortest route is ADF.
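Here is that walkthrough as runnable code (a Python rendering added for illustration, not part of the original answer): the dictionary encodes the six-station diagram, the loop relaxes costs exactly as described, and the backward hop over the "from" links recovers ADF.

```python
import heapq

edges = {  # bidirectional, cost 1 km each, per the triangle diagram
    "A": ["B", "D"], "B": ["A", "C", "D", "E"], "C": ["B", "E"],
    "D": ["A", "B", "E", "F"], "E": ["B", "C", "D", "F"], "F": ["D", "E"],
}

def shortest_route(start, goal):
    cost = {start: 0}            # cheapest known cost to each node
    came_from = {start: None}    # the node we arrived from
    queue = [(0, start)]
    while queue:
        c, node = heapq.heappop(queue)
        if node == goal:
            break
        for nxt in edges[node]:
            if c + 1 < cost.get(nxt, float("inf")):  # cheaper? update and carry on
                cost[nxt] = c + 1
                came_from[nxt] = node
                heapq.heappush(queue, (c + 1, nxt))
    route = []                   # work backwards from the target
    while goal is not None:
        route.append(goal)
        goal = came_from[goal]
    return "".join(reversed(route))

print(shortest_route("A", "F"))  # -> "ADF"
```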
From my experience of working in this field, A* does the job very well. It is (as mentioned above) faster than Dijkstra's algorithm, but is still simple enough for an ordinarily competent programmer to implement and understand.
Building the route network is the hardest part, but that can be broken down into a series of simple steps: get all the roads; sort the points into order; make groups of identical points on different roads into intersections (nodes); add arcs in both directions where nodes connect (or in one direction only for a one-way road).
The A* algorithm itself is well documented on Wikipedia. The key place to optimise is the selection of the best node from the open list, for which you need a high-performance priority queue. If you're using C++ you can use the STL priority_queue adapter.
Customising the algorithm to route over different parts of the network (e.g., pedestrian, car, public transport, etc.) or to favour speed, distance or other criteria is quite easy. You do that by writing filters to control which route segments are available when building the network, and which weight is assigned to each one.
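By way of illustration, here is a compact A* sketch in the spirit of this answer, written in Python with heapq standing in for the priority queue (the graph and coordinate dictionaries are assumed shapes for this sketch, not any particular library's API):

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """graph: {node: [(neighbour, edge_cost), ...]}, coords: {node: (x, y)}."""
    def h(n):  # admissible heuristic: straight-line distance to the goal
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_list = [(h(start), 0.0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0.0}
    while open_list:
        _, g, node, path = heapq.heappop(open_list)  # best node from open list
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_list, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")
```

The heuristic must never overestimate the true remaining cost (straight-line distance satisfies this for road networks), which is what lets A* skip the exploration Dijkstra's algorithm would otherwise do.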
Another thought occurs to me regarding the cost of each traversal, though it would increase the time and processing power required to compute.
Example: there are 3 ways I can take (where I live) to go from point A to B, according to Google Maps, and Garmin units offer each of these 3 paths in their Quickest route calculation. After traversing each of these routes many times and averaging (obviously there will be errors depending on the time of day, amount of caffeine, etc.), I feel the algorithms could take the number of bends in the road into account for a higher level of accuracy, e.g. a straight road of 1 mile will be quicker than a 1-mile road with sharp bends in it.
Not a practical suggestion but certainly one I use to improve the result set of my daily commute.