I am trying to use the following formula, with a key I obtained from MapQuest (for the free service), to call their directions service and calculate the distances between one variable location and 20 of my plant locations.
=importXML("http://mapquestapi.com/directions/v2/route?key=*****************&outFormat=xml&from=" & $B$3 & "&to=" & 56244,"//response/route/distance")
This is working flawlessly, except that after using it for a short period of time I received an email stating I have used 80% of my allotment (15,000 transactions) for the month.
The variable location has only been changed around 20-25 times this month, so I don't see how I could have used that many transactions. Can someone explain what exactly this formula is doing and how I could make it more efficient if possible? I feel like it has to be using transactions that are unnecessary. Keep in mind I do not need the actual directions; all I need is the driving mileage.
Thanks in advance.
In my AnyLogic model I have a population of agents (4 terminals) where trucks arrive, are served and depart. The terminals have two parameters (numberOfGates and servicetime) which influence the departures per hour of trucks leaving the terminals. Now I want to tune these two parameters so that the number of departures per hour is closest to reality (I know the actual departures per hour). I already have two datasets within each terminal agent: one with the number of departures per hour that I simulate, and one with the observedDepartures from the data.
I already compare these two datasets in plots for every terminal.
Now I want to create an optimization experiment to tune the numberOfGates and servicetime of the terminals so that the departures dataset is as close as possible to the observedDepartures dataset. Does anyone know the easiest way to create an objective function for this optimization experiment?
When I add a variable diff that is updated every hour by abs(departures - observedDepartures) and put root.diff in the optimization experiment, it gives me the error "eq(null) is not allowed. Use isNull() instead" in a line that reads the database for the observedDepartures (see last picture). It works when I run the simulation normally; it only gives this error when running the optimization experiment (I don't know why).
You can use the sum of the absolute differences for each replication. That is, create a variable that logs the |difference| for each hour; call it diff. Then, in the optimization experiment, minimize the sum of that variable. In fact this is close to a typical regression model's objective; there, a more complex objective function is used, minimizing the sum of the squared differences.
A Calibration experiment already does (in a more mathematically correct way) what you are trying to do, using the in-built difference function to calculate the 'area between two curves' (which is what the optimisation is trying to minimise). You don't need to calculate differences or anything yourself. (There are two variants of the function to compare either two Data Sets (your case) or a Data Set and a Table Function (useful if your empirical data is not at the same time points as your synthetic simulated data).)
In your case it (the objective function) will need to be a sum of the differences between the empirical and simulated datasets for the 4 terminals (or possibly a weighted sum if the fit for some terminals is considered more important than for others).
So your objective is something like
difference(root.terminals(0).departures, root.terminals(0).observedDepartures)
+ difference(root.terminals(1).departures, root.terminals(1).observedDepartures)
+ difference(root.terminals(2).departures, root.terminals(2).observedDepartures)
+ difference(root.terminals(3).departures, root.terminals(3).observedDepartures)
(It would be better to calculate this for an arbitrary population of terminals in a function but this is the 'raw shape' of the code.)
A Calibration experiment is actually just a wizard which creates an Optimization experiment set up in a particular way (with a UI and all settings/code already created for you), so you can just use that objective in your existing Optimization experiment (but it won't have a built-in useful UI like a Calibration experiment). This also means you can still set this up in the Personal Learning Edition too (which doesn't have the Calibration experiment).
Intro
I'm using a slightly modified GTFS database.
I have a first step algorithm that given two geographical locations provides:
the list of stops around departure and arrival
the list of routes that connect those lists of stops
The second step algorithm finds the best journeys matching those stops and routes.
This is working well on direct journeys as well as journeys using one connection.
My problem arises when trying to find the best journey using 2 connections (so there are 3 trips to be searched).
Database
The GTFS format has the following tables (each table has a foreign key to the previous/next table in this list):
stops: stop information (geolocation, name, etc)
stop_times: timetable
trips: itinerary taken by a vehicle (bus, metro, etc)
routes: family of trips that roughly take the same path (e.g. standard and express trips on the same route, but different stops taken)
I have added the following tables:
stop_connections: stop-to-stop connections (around 1 to 20 per stop)
stops_routes: lists the available routes at every stop
Here's the table row count in a city where I get slow results (Paris, France):
stops: 28k
stop_times: 12M
trips: 513k
routes: 1k
stop_connections: 365k
stops_routes: 227k
Algorithm
The first step of my algo takes two latitude/longitude points as input, and provides:
the list of stops at each location
the routes that can be used to connect those stops (with up to two connections)
The second step takes each start stop, and analyses the available journeys that use only the routes selected by the first step.
This is the part that I'm trying to optimize. Here's how I'm querying the database:
My search terms (green in the picture):
one departure stop
several arrival stops (1 to 20)
allowed routes at departure, at first connection and on last trip
service ID (not relevant here, can be ignored)
Here's what I do now:
1. Start from a stop => get timetable => get trips => get routes; filter on allowed routes.
2. Connect the arrival stops of the first trip to a list of possible stops using stop_connections.
3. Repeat from step 1 two times so that I have 3 trips/2 connections (a rough SQL sketch of one leg is shown below).
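In SQL terms, one leg of that chain might look roughly like the sketch below. The column names (stop_id, trip_id, stop_sequence, departure_time, arrival_time, route_id, from_stop_id, to_stop_id) are assumed from the standard GTFS schema plus my stop_connections table, and the @variables and route IDs are placeholders:

SELECT sc.to_stop_id, st2.trip_id, t.route_id, st2.arrival_time
FROM stop_times st1
JOIN trips t ON t.trip_id = st1.trip_id
JOIN stop_times st2 ON st2.trip_id = t.trip_id
                   AND st2.stop_sequence > st1.stop_sequence  -- a later stop on the same trip
JOIN stop_connections sc ON sc.from_stop_id = st2.stop_id     -- walk to a connected stop
WHERE st1.stop_id = @departure_stop
  AND st1.departure_time >= @departure_time
  AND t.route_id IN (1, 2, 3);                                -- placeholder: allowed routes for trip 1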
The problem
This is working fine in some cases, but it can be very slow in others. Usually, as soon as I join the timetable or the stop connections, there is a 10x increase in the returned rows. Since I'm joining these tables 8 times, there are potentially 10^8 rows to be searched by the engine.
Now I'm sure that I can get this to be more efficient.
My problem is that the number of rows increases at every join, and the arrival stop selection is made at the very end.
I mean I get all the possible journeys from a given stop at a given departure time (there can be millions of combinations), and only when my search reaches the last trip can I filter on the ~20 allowed arrival stops.
It could be much faster if I could somehow 'know' soon enough that a route isn't worth searching.
Optimizations
Here's what I tried/thought of:
1. Inner join stops_routes when joining stop_connections
Only select stops at a connection that lead to the allowed routes at next trip.
This is sometimes efficient when there are a lot of connections and not all the connected stops are interesting (some connected stops might only be used by a route we don't want to take).
However this inner join can increase the number of rows if there are not many connected stops and a lot of allowed routes.
2. Partition the stop_times table
Create a smaller copy of stop_times that contains only the timetable of the next two hours or so. Indeed, having the database engine search the whole timetable (up to 10pm for example) when my trip starts at 8am is useless. Keeping only 8am-10am is enough and much faster.
This is very efficient, because it dramatically decreases the number of rows to be searched.
I have implemented this with success; it decreased the search time by a factor of roughly 10 to 100.
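As a rough illustration of this partitioning (the table and column names are assumed, and the two-hour window is hard-coded only for the example):

CREATE TEMPORARY TABLE stop_times_window AS
SELECT *
FROM stop_times
WHERE departure_time BETWEEN '08:00:00' AND '10:00:00';

ALTER TABLE stop_times_window ADD INDEX idx_stop_departure (stop_id, departure_time);

-- subsequent joins then use stop_times_window instead of the full 12M-row stop_times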
3. Identify 'good' and 'bad' routes
In a metropolitan area there are usually large routes that are very useful when travelling long distances. But these routes aren't the best option when travelling short distances. A person who knows their own city's public transportation system can quickly tell you that, from this neighborhood to that one, the best option is to take a specific route.
However this is very difficult to do, and requires a customization on every city.
I plan to make this algo completely independent of the city, so I'm not really willing to go down that road.
4. Use crowdsourcing to identify paths that work well
The first search is slow, but the information taken from it can be used to serve fast results to the next person with a similar journey.
However there are so many combinations of departure and arrival stops that the information taken from one query might not be very useful.
I don't know if this is a good idea. I haven't implemented it.
Next
I'm running out of ideas. I know this is not a programming question, but rather a request for ideas on an algorithm. I hope this falls within the SO scope.
Having it on a network makes things a little bit interesting, but fundamentally, you're doing pathfinding, which is a slow process. You're running into the exponential nature of the problem, and doing so with only 3 connections.
I have a couple of suggestions that you can perhaps use while doing this with MySQL, and a couple that are likely not implementable within it.
Rather than partitioning the timetable, only take the next time for any given route. If you're leaving at 8 AM, you're correct, only looking at routes from 8-10 is better than looking at them all. However, if there's a route from A-B that leaves at 8:20, 8:40, 9:00, 9:15, 9:25, 9:45... there is zero reason to take them all: just take the first arrival time for any given route, since it's strictly better than the rest.
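A sketch of that idea in SQL (keeping only the next departure per route from a given stop; the column names are the standard GTFS ones and the @variables are placeholders):

SELECT t.route_id, MIN(st.departure_time) AS next_departure
FROM stop_times st
JOIN trips t ON t.trip_id = st.trip_id
WHERE st.stop_id = @departure_stop
  AND st.departure_time >= @departure_time
GROUP BY t.route_id;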
I presume you are pruning any routes that return to an already-visited location? If not, you perhaps should be: they're not useful for you. This may be somewhat difficult to do within the SQL framework.
Depending on its coverage, you could perhaps find a path using the (much smaller) routes table, and then find the best implementation of the top working paths from the trips table.
This is likely impossible within the framework of SQL, but the thing that makes most decent pathfinding algorithms fast is that they use a heuristic to search. Your search goes down every possible route -- it would be a lot faster to first look down the route that leads in the right direction. If it doesn't pan out, less likely directions are picked. The key here is that as soon as you have a result, you return it -- you effectively pruned every route you didn't yet search by the time you returned an answer.
Pre-calculated preferred routes: you suggest this would require human intervention, but I counter that you could do it computationally. Spend the time properly searching for routes from various points to various other points, and check on the statistics of how the routes worked. I would expect that you will find things allowing you to make an "anywhere over here to anywhere over there is going to use this intermediate path" table -- your problem is reduced from "find a path from A to B" to "find a path from A to C, followed by a path from D to B". Doing this will have the potential of causing you to find sub-optimal routes (as you are making an assumption from the precalculated statistics), but it may let you find that sub-optimal route much faster. On a mesh layout it will not work at all well; on a hub layout it will work excellently.
Thanks to zebediah49, I have implemented the following algorithm:
0. Lookup tables
First, I have created an ID on the trips table that uniquely identifies a trip's path. It is based on the list of stops taken in sequence, so this ID guarantees that two trips with the same ID will take exactly the same route.
I called this ID trip_type.
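One possible way to build such an ID (a sketch only; it assumes MySQL's GROUP_CONCAT and the standard stop_times columns, and group_concat_max_len may need to be raised for trips with many stops):

SELECT st.trip_id,
       MD5(GROUP_CONCAT(st.stop_id ORDER BY st.stop_sequence)) AS trip_type
FROM stop_times st
GROUP BY st.trip_id;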
I have improved my stop_connections table so that it includes a cost. This is used to select the best connection when two 'from' stops are connected to the same 'to' stop.
1. Get trips running from the departure stop(s)
Limit those trips to only 1 per trip type (group by trip_type)
2. Get arrival stops from these trips
Select only the best trip if there are two trips reaching the same stop
3. Get connected stops from these arrival stops
Select only the best connection if there are >1 stops that are connected to the same stop
4. Repeat from step 1
I have split this into several subqueries and temporary tables, because I can easily group and filter the best stops/trips at each step. This ensures that the minimum amount of searching is sent to the SQL server.
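For illustration, the first of those temporary tables might look something like this (a sketch with placeholder IDs and assumed column names, not the actual procedure body):

CREATE TEMPORARY TABLE leg1 AS
SELECT t.trip_type, MIN(st.departure_time) AS first_departure
FROM stop_times st
JOIN trips t ON t.trip_id = st.trip_id
WHERE st.stop_id IN (101, 102)             -- placeholder: departure stops
  AND st.departure_time >= @departure_time
  AND t.route_id IN (1, 2, 3)              -- placeholder: allowed routes for trip 1
GROUP BY t.trip_type;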
I have stored this algorithm in an SQL procedure that will do all of this in a single SQL statement:
call Get2CJourneys(dt, sd, sa, r1, r2, r3)
Where:
dt: departure time
sd: stops at departure point
sa: stops at arrival point
r1, r2, r3: allowed routes for the 1st, 2nd and 3rd trips
The procedure call returns interesting results in <600ms where my previous algorithm returned the same results in several minutes.
Expanding on zebediah49's fourth point, you can precompute the vector traveled by a route, e.g. a route going due north has a vector of 0, due west = 90, due south = 180, due east = 270. Only return routes whose vectors are within, say, +/- 15 degrees (modulo 360) of the as-the-crow-flies direction (or +/- 30 if the +/- 15 query doesn't return any hits).
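As a sketch of how such a bearing could be precomputed in MySQL (this uses the standard initial-bearing formula with the compass convention north = 0, east = 90, so adjust if you prefer the convention above; the bearing, first_stop_id and last_stop_id columns and the lat/lon names are assumptions):

UPDATE routes r
JOIN stops s1 ON s1.stop_id = r.first_stop_id   -- assumed: first stop of the route
JOIN stops s2 ON s2.stop_id = r.last_stop_id    -- assumed: last stop of the route
SET r.bearing = MOD(DEGREES(ATAN2(
      SIN(RADIANS(s2.lon - s1.lon)) * COS(RADIANS(s2.lat)),
      COS(RADIANS(s1.lat)) * SIN(RADIANS(s2.lat))
    - SIN(RADIANS(s1.lat)) * COS(RADIANS(s2.lat)) * COS(RADIANS(s2.lon - s1.lon))
  )) + 360, 360);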
My company currently runs a listing service of family activities. In our CMS we have two types of entities: Branches (the shops we list) and Events (Special Offers, Occasions etc).
Typically when listing an event we would say which Branches it is for and create a relationship; we would then search the nearby shops for events, grab them and sort them by distance.
Now our clients want to be able to list a one-off event that hasn't got a branch associated with it (for example, they host a festival at a nearby garden centre rather than one of their shops). I can easily make it so I can sort these by distance as well.
But what I was wondering is how I could combine the two, so one of our apps could ask our API, "Dude, where are the 10 events nearest to where I am right now?", and the API would pull up a list of the 10 closest events.
It should be able to handle Events that use the location of a Branch as well as Events that have their own unique location.
Or do you think I should just store location as its own entity, or have hidden branches: places we can set up as being where the event is happening but that don't actually show up as a branch in the app :)
If you have lat/long positions for your events and your branches, you can apply the Haversine formula to compute approximate distances, then order by ascending distance.
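A rough sketch of what that could look like in MySQL, assuming an events table whose rows either carry their own lat/lng or fall back to the lat/lng of a linked branch (all table and column names here are placeholders, and 3959 is the Earth radius in miles):

SELECT e.id, e.name,
       (3959 * ACOS(
           COS(RADIANS(@lat)) * COS(RADIANS(COALESCE(e.lat, b.lat)))
         * COS(RADIANS(COALESCE(e.lng, b.lng)) - RADIANS(@lng))
         + SIN(RADIANS(@lat)) * SIN(RADIANS(COALESCE(e.lat, b.lat)))
       )) AS distance_miles
FROM events e
LEFT JOIN branches b ON b.id = e.branch_id
ORDER BY distance_miles
LIMIT 10;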
MySQL can do this, if you're willing to use a hairy query. This note from the Google Maps team gives the query. You don't have to use Google Maps to do this; you just need lat/long information for each place involved.
https://developers.google.com/maps/articles/phpsqlsearch_v3
Edit It's true that this is very slow if you compute the distance between many pairs of places. The trick to making this kind of operation fast is using a bounding box (spherectangular) distance limit, and putting indexes on your latitude and longitude.
Look at this: Geolocation distance SQL from a cities table
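A bounded version of that Haversine query might look roughly like the sketch below; the 0.5-degree box is an arbitrary example and the names are placeholders, but the point is that the BETWEEN predicates can use plain indexes on lat and lng before the exact distance is computed:

ALTER TABLE events ADD INDEX idx_lat (lat), ADD INDEX idx_lng (lng);

SELECT *
FROM (
    SELECT e.id, e.name,
           (3959 * ACOS(
               COS(RADIANS(@lat)) * COS(RADIANS(e.lat))
             * COS(RADIANS(e.lng) - RADIANS(@lng))
             + SIN(RADIANS(@lat)) * SIN(RADIANS(e.lat))
           )) AS distance_miles
    FROM events e
    WHERE e.lat BETWEEN @lat - 0.5 AND @lat + 0.5   -- crude bounding box
      AND e.lng BETWEEN @lng - 0.5 AND @lng + 0.5
) AS nearby
ORDER BY distance_miles
LIMIT 10;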
MySQL has support for spatial data via its spatial extension. This will allow you to use spatial datatypes in your columns, as well as build indexes on them and perform various kinds of spatial analysis, such as polygon intersection.
Not sure this is what you need, but it may be worth investigating.
We are trying to get the last N changes for a user, and currently do so by getting the largestChangeId, then subtracting a constant from that and getting more changes.
As an example, we typically are making API calls with the changestamp = largestChangeId - 300, with maxResults set to 300.
We've seen anywhere from half a dozen to 180 changes come back across our userbase with these parameters.
One issue that we're running into is that the number of changes we get back is rather unpredictable, with huge jumps in change stamps for some users, so we've had to choose between two rather unpalatable scenarios to get the last N changes.
Request lots of changes, which can lead to slow API calls simply because there are lots of changes.
Request a small set of changes and seek back progressively in smaller batches, which is also slow because it results in multiple API calls.
Our goal is to get the last ~30 or so changes for a user as fast as possible.
As a workaround, we are currently maintaining per-user state in our application to tune the maximum number of changes we request up or down based on the results we got for that user the last time around. However, this is somewhat fragile because the rate at which changes accrue for a user can vary over time.
So my question is as follows:
Is there a way to efficiently get the last N changes for a user, specifically in one API call?
ID generation is very complex; it's impossible to calculate the ID of the user's nth latest change :) The changes list actually has no feature that'd be appropriate for your use case. In my own personal opinion, the changes list should be available in reverse chronological order; I'm going to discuss it with the rest of the team.
I'm looking for the best way to implement the following.
I have a MySQL query that analyses performance of a company, and outputs sales, revenue, costs and so on, and basically outputs a final Gross Profit and Net Profit figure. It's working well, and managers can select any date range to run it for, and it will output everything for that range - therefore they can in theory see how the company performed today, yesterday, this week, last month or on the 17th of three months ago... you get the point.
There is a problem, however, when some of the figures used for the report are variable and involve fluctuating external costs, such as overheads and so on. I allow users to specify these costs and overheads in a settings table, and the performance query uses these to calculate its figures. But these variable figures represent the present, so they would bear no relevance if you wanted to look at the company's performance from X months/years in the past, when today's overheads would be offset against them, creating inaccuracy.
I thought of a couple of solutions.
I could allow the managers to set a date range to apply the overheads for. For example, for June 2011, the daily overhead was £2000, whereas in July 2011 the overhead is £2250.
Or I could save the performance report/query to another table, which would obviously have the variable figures locked in from the time it ran. This could even be automated with a crontab, and perhaps just run every night.
Which way would you recommend?
If I were you I would go with the first option (1), and create a table that stores the overheads applying to a specific date range. This is much more flexible, as it lets you run any kind of query at any point in time against the pure/"virgin" data you have in the table.
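As a rough sketch of that first option (every table and column name here is a placeholder; adapt it to your schema):

CREATE TABLE overheads (
    id             INT AUTO_INCREMENT PRIMARY KEY,
    starts_on      DATE NOT NULL,
    ends_on        DATE NULL,               -- NULL = still in effect
    daily_overhead DECIMAL(10,2) NOT NULL
);

-- the performance query then picks up the overhead that was in force on each day
SELECT s.sale_date,
       SUM(s.revenue) AS revenue,
       o.daily_overhead
FROM sales s
JOIN overheads o
  ON s.sale_date >= o.starts_on
 AND (o.ends_on IS NULL OR s.sale_date <= o.ends_on)
GROUP BY s.sale_date, o.daily_overhead;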
On the other hand, the second option doesn't seem that feasible to me, because you can't possibly pre-calculate all the queries and reports that might be needed for different date ranges.