Is it possible to speed up mapping with OSM by removing features and details (minor roads, bus stops, etc.) -- or is that somewhat irrelevant to the tile download and rendering process?
In other words, are the SVG details added/removed on the client side or the server side?
Further, how are those 'church: invisible' type of instructions set?
TIA
Perhaps this is a general mapping question, given that any engine (possibly) operates in much the same way, at least when it comes to tiles and SVG details - and I simply don't know that process.
It is not quite clear which specific process you are talking about.
The tiles you see on http://openstreetmap.org are PNG, not SVG. The share menu allows you to export the current view to SVG and some other formats. That SVG is created server-side. Of course, using less detail would speed up the SVG creation process, but that process involves several other operations, like querying the database, which won't benefit much from reduced detail (ignoring the time to transfer the data between the database and the SVG creation process).
The PNG rendering will also depend to some extent on the amount of detail, but likewise there are many more operations necessary for rendering a single tile. I don't expect a large speedup from removing a few features.
Also note that there are several different renderers available and each will behave differently. There is also the possibility of creating vector tiles, which moves some of the tile-creation load from the server side to the client side. Here the amount of detail will slightly influence the server side and influence the client side significantly more, especially on low-end systems.
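For what it's worth, the client-side case can be illustrated with a short sketch. Assuming a vector-tile setup rendered with MapLibre GL JS (the style URL and the "poi-religious" layer id below are made-up placeholders; real ids depend on your tile schema), hiding a feature class like churches is just a style change in the browser, with no new tiles downloaded:

    // Hypothetical sketch: hide a class of features in a vector-tile map, client-side.
    // The style URL and the "poi-religious" layer id are placeholders; real layer ids
    // depend entirely on the tile schema and style you use.
    import maplibregl from "maplibre-gl";

    const map = new maplibregl.Map({
      container: "map",
      style: "https://example.com/style.json", // placeholder style URL
      center: [0, 0],
      zoom: 14,
    });

    map.on("load", () => {
      // A 'church: invisible' kind of instruction becomes a layout property here:
      map.setLayoutProperty("poi-religious", "visibility", "none");
    });

With raster (PNG) tiles, by contrast, such a rule lives in the server-side stylesheet used for rendering, so the client can't toggle it.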
Still I have no idea what these things have to do with mapping - the process of editing maps and adding/updating information.
HTTP/2 makes it possible to multiplex requests over a single connection, eliminating the need for more than one connection to a server. Over that single connection, many individual images can be sent down to the client. This obviates the old image-sprite pattern of combining many images into one and using CSS to cut it apart.
I'm curious if sprites would still actually be faster in an HTTP/2 world. If so, under what circumstances?
Sprites, as you will know, are used to prevent multiple requests being queued, so with one payload you could get all the sprites for the site.
But with sprites you tend to get lots of additional icons that are used throughout the website that aren't all needed on any single page.
So with http/2 multiplexing, queuing resources is no longer an issue. You get the speed benefit when you only download the files needed for each page.
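As a rough sketch of what that looks like from the client's point of view (the icon paths below are made up): each page simply requests the individual files it needs, and an HTTP/2 connection multiplexes them with no sprite sheet.

    // Sketch: request only the icons this page needs; over HTTP/2 the browser
    // multiplexes all of these requests on a single connection, so no sprite
    // sheet is required. The icon names/paths are made-up placeholders.
    const iconsNeededOnThisPage = ["search.svg", "cart.svg", "user.svg"];

    async function loadIcons(names: string[]): Promise<Blob[]> {
      return Promise.all(
        names.map(async (name) => {
          const res = await fetch(`/icons/${name}`);
          if (!res.ok) throw new Error(`Failed to load ${name}: ${res.status}`);
          return res.blob();
        })
      );
    }

    loadIcons(iconsNeededOnThisPage).then((blobs) =>
      console.log(`Loaded ${blobs.length} icons`)
    );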
However you may get better compression by combining some images into a single file, making the overall size of file transfers smaller.
Speed tests run by Benoît Béraud and Alexandre Masselot have given an example of a sprite sheet loading faster than individual sprites. They concluded that sprite sets can still be used to optimise site performance when using http/2 http://blog.octo.com/en/http2-arrives-but-sprite-sets-aint-no-dead/
An extended write-up about HTTP/2 by Rachel Andrew can be found here:
https://www.smashingmagazine.com/2016/02/getting-ready-for-http2/
With HTTP/2 multiplexing, the server will be reading a lot of small files instead of reading a single big file. If the server is resource-limited (some Internet-of-Things contraption, for example), then you might be able to find a situation where it's better to have it making the single, big read instead of a lot of small ones, since each read causes the server OS to potentially do a lot of file-access-related operations.
On the client side, the browser will be managing a lot of small files, instead of a big one. I can imagine the code path used for the current sprite workflow being well massaged and optimized, since it's so commonly used. So it might happen that the new case of having a lot of small files could be slower, at least for a time.
I am going to be working on a self-chosen project for my college networking class, and I just had a couple of questions to help get me started in the right direction.
My project will involve creating a new "physical" link over which data, in the form of text, will be transmitted from one computer to another. This link will involve one computer with a webcam that reads a series of flashing colors (black/white) as binary and converts it to text. Each series of flashes will simulate a packet of data. I will be using OS X and the integrated webcam in a MacBook; the flashing computer will be either Windows or OS X.
So my questions are: which programming languages or APIs would be best for reading live webcam data and analyzing the color of a certain area, as well as for programming and timing the flashes? Also, would I need to worry about matching the flash rate of the "writing" computer to the frame capture rate of the "reading" computer?
Thank you for any help you might be able to provide.
Regarding the frame capture rate, the Shannon sampling theorem says that "perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled". In other words, if your flashing light switches 10 times per second, you need a camera of more than 20 fps to properly capture that. So basically: check your camera specs, divide by 2, lower the result a little, and you have your maximum flashing rate.
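In symbols, with a worked example assuming a 30 fps camera (the 30 fps figure is just an illustration):

    f_{\text{camera}} > 2\, f_{\text{flash}}
    \quad\Longrightarrow\quad
    f_{\text{flash}} < \frac{f_{\text{camera}}}{2} = \frac{30\ \text{fps}}{2} = 15\ \text{Hz}

so in practice you would flash somewhat below 15 times per second.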
Whatever can grab the frames will work. If the lighting conditions the camera works in are stable, and the position of the light in the image is static, then it is going to be very easy: just check the average pixel value of a certain area.
If you need additional image processing, you should probably also look into OpenCV (it has bindings for most major programming languages).
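Not OpenCV, but as a minimal sketch of the "average pixel value of a region" idea in a browser (TypeScript, using getUserMedia and a canvas; the region coordinates, sample rate and the 128 threshold are made-up values you would calibrate yourself):

    // Minimal sketch of "average brightness of a fixed region" using the browser
    // webcam API rather than OpenCV/JMF. Region coordinates, sample rate and the
    // 128 threshold are placeholder values to be calibrated for the real setup.
    async function watchRegion(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });
      const video = document.createElement("video");
      video.srcObject = stream;
      await video.play();

      const canvas = document.createElement("canvas");
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      const ctx = canvas.getContext("2d")!;

      setInterval(() => {
        ctx.drawImage(video, 0, 0);
        // Sample a small region where the flashing light is expected to appear.
        const region = ctx.getImageData(300, 200, 20, 20);
        let sum = 0;
        for (let i = 0; i < region.data.length; i += 4) {
          // Average the R, G, B channels of each pixel (ignore alpha).
          sum += (region.data[i] + region.data[i + 1] + region.data[i + 2]) / 3;
        }
        const avg = sum / (region.data.length / 4);
        const bit = avg > 128 ? 1 : 0; // bright frame = 1, dark frame = 0
        console.log("bit:", bit);
      }, 1000 / 30); // sample at roughly the camera frame rate
    }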
To answer your question about language choice, I would recommend Java. The Java Media Framework is great and easy to use; I have used it for capturing video from webcams in the past. Be warned, however, that everyone you ask will recommend a different language - everyone has their preferences!
What are you using as the flashing device? What kind of distance are you trying to achieve? Something worth thinking about is how you are going to get the receiver to recognise where within the captured image to look for the flashes. Some kind of fiducial marker might be necessary. Longer ranges will make this problem harder to solve.
If you're thinking about shorter ranges, have you considered using a two-dimensional transmitter? (Given that you're using a two-dimensional receiver, it makes sense.) You could have a transmitter that shows a sequence of QR codes (or similar encodings) on a monitor.
You will have to consider some kind of error-correction encoding, such as a Hamming code. While the encoding would increase the data footprint, it might give you better overall bandwidth, given that you can crank up the speed much higher without having to worry about the odd corrupt bit.
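For illustration, here is a minimal Hamming(7,4) encode/decode sketch (it corrects a single flipped bit per 7-bit block; framing and packetising are left out):

    // Minimal Hamming(7,4) sketch: 4 data bits -> 7-bit codeword, corrects one flipped bit.
    // Bit layout (1-indexed positions 1..7): [p1, p2, d1, p4, d2, d3, d4].
    function hammingEncode([d1, d2, d3, d4]: number[]): number[] {
      const p1 = d1 ^ d2 ^ d4; // covers positions 1, 3, 5, 7
      const p2 = d1 ^ d3 ^ d4; // covers positions 2, 3, 6, 7
      const p4 = d2 ^ d3 ^ d4; // covers positions 4, 5, 6, 7
      return [p1, p2, d1, p4, d2, d3, d4];
    }

    function hammingDecode(codeword: number[]): number[] {
      const c = [...codeword];
      // Recompute the parities; the syndrome gives the position of a single error.
      const s1 = c[0] ^ c[2] ^ c[4] ^ c[6];
      const s2 = c[1] ^ c[2] ^ c[5] ^ c[6];
      const s4 = c[3] ^ c[4] ^ c[5] ^ c[6];
      const errorPos = s1 + 2 * s2 + 4 * s4; // 0 means no single-bit error detected
      if (errorPos > 0) c[errorPos - 1] ^= 1; // flip the corrupted bit back
      return [c[2], c[4], c[5], c[6]];        // extract d1..d4
    }

    // Example: a bit corrupted in transit is corrected.
    const sent = hammingEncode([1, 0, 1, 1]);
    sent[2] ^= 1;                     // simulate a flipped bit on the channel
    console.log(hammingDecode(sent)); // [1, 0, 1, 1]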
Some 'evaluation'-type material might include discussing the obvious security risks of using such a channel - anyone with line of sight to the transmitter can eavesdrop! You could suggest in your write-up using some kind of encryption; a block cipher in CBC mode would do, but it would require a key exchange prior to transmission, so you could think about public-key encryption.
I am looking into plotting a very large data set. I've tried FLOT, FLOTR and PROTOVIS (and other JS-based packages), but there is one constant problem I'm faced with. I've tested 1600, 3000, 5000, 8000 and 10k points on a 1000w x 500h graph, all of which render within a reasonable time in PC browsers (IE and FF). But when rendered in FF/Safari on Macs, starting at around 500 data points the page becomes significantly slow and/or crashes.
Has anyone come across this issue?
Yes, don't do that. It seems pretty unlikely to me that 10k points are actually going to be visible/useful to the user all at once.
You should aggregate your data (server-side), and then if the user wants to zoom in on an area of the data, use AJAX requests to fetch that area and replot.
If you use flot, they have examples showing selection, i.e. here: http://people.iola.dk/olau/flot/examples/zooming.html
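For illustration, a minimal server-side-style downsampling sketch (bucket averaging). The target of 1000 buckets roughly matches the 1000px-wide plot area and is just an assumption:

    // Sketch: reduce a large [x, y] series to roughly one point per horizontal pixel
    // by averaging fixed-size buckets before handing the data to the plotting library.
    // The default of 1000 buckets matches a ~1000px-wide plot and is an assumption.
    type Point = [number, number];

    function downsample(points: Point[], buckets = 1000): Point[] {
      if (points.length <= buckets) return points;
      const size = Math.ceil(points.length / buckets);
      const out: Point[] = [];
      for (let i = 0; i < points.length; i += size) {
        const slice = points.slice(i, i + size);
        const avgX = slice.reduce((s, p) => s + p[0], 0) / slice.length;
        const avgY = slice.reduce((s, p) => s + p[1], 0) / slice.length;
        out.push([avgX, avgY]);
      }
      return out;
    }

    // On zoom, request only the selected x-range at finer resolution and
    // downsample that subset again before replotting.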
(I can't comment on Ryley's answer yet, which is why I put some remarks here.)
What about offline use? HTML is a great format for documents, the server/client stuff aside.
JavaScript, Canvas and all those fancy client-side technologies could be used to build nice interactive files, like data reports containing graphs with zoom and pan features...
I have read some website development materials on the Web, and every time someone asks how to organize a website's JS, CSS, HTML and PHP files, people suggest a single JS file for the whole website. The argument is speed.
I clearly understand that the fewer requests there are, the faster the page responds. But I have never understood the single-JS argument. Suppose you have 10 web pages and each page needs a JS function to manipulate its DOM objects. Put 10 functions in a single JS file and let that file execute on every page, and 9 out of 10 functions are doing useless work: CPU time is wasted searching for non-existent DOM objects.
I know that CPU time on an individual client machine is trivial compared to bandwidth on a single server machine. I am not saying you should have many JS files on a single web page. But I don't see anything wrong with every web page referring to 1 to 3 JS files that are cached on the client machine. There are many good ways to handle caching; for example, you can use an expiry date or include a version number in the JS file name. Compared to cramming the functionality for every page of a website into one big JS file, I far prefer splitting the JS code into smaller files.
Any criticism/agreement on my argument? Am I wrong? Thank you for your suggestions.
A function does zero work unless it is called, so 9 uncalled functions are zero work - just a little extra space.
A client only has to make one request to download one big JS file, which is then cached for every other page load. That is less work than making a small request on every single page.
I'll give you the answer I always give: it depends.
Combining everything into one file has many great benefits, including:
less network traffic - you might be retrieving one file, but you're sending/receiving multiple packets, and each connection starts with a series of SYN, SYN-ACK and ACK messages across TCP. For small files, a large part of the transfer time goes into establishing the session, and there is a lot of overhead in the packet headers.
one location/manageability - although you may only have a few files, it's easy for functions (and class objects) to grow between versions. With the multiple-file approach, functions in one file sometimes call functions/objects from another file (e.g. AJAX in one file, arithmetic functions in another - your arithmetic functions might grow to need to call the AJAX code and expect a certain variable type returned). What ends up happening is that your set of files needs to be treated as one version, rather than each file being its own version. Things get hairy down the road if you don't have good management in place, and it's easy to fall out of line with JavaScript files, which are always changing. Having one file makes it easy to manage the version across each of your pages and your (one to many) websites.
Other topics to consider:
dormant code - you might think that the uncalled functions reduce performance by taking up space in memory, and you'd be right, but the cost is so minuscule that it doesn't matter. Functions are indexed in memory, and while the index table may grow, it's trivial for small projects, especially given today's hardware.
memory leaks - this is probably the strongest reason not to combine all the code, but it is a small issue given the amount of memory in systems today and the better garbage collection in current browsers. It is also something that you, as a programmer, can control: quality code leads to fewer problems like this.
Why does it depend?
While it's easy to say "throw all your code into one file", that would be wrong. It depends on how large your code is, how many functions there are, who maintains it, and so on. Surely you wouldn't pack your locally written functions into the jQuery package, and you may have different programmers maintaining different blocks of code - it depends on your setup.
It also depends on size. Some programmers embed images as encoded text (e.g. base64 data URIs) in their files to reduce the number of requests, which can bloat those files. Surely you don't want to package everything into a single 50 MB file, especially if there are core functions that are needed for the page to load at all.
So, to bring my response to a close: we'd need more information about your setup, because it depends. Surely 3 files is acceptable regardless of size, combined where you see fit; it probably wouldn't really hurt network traffic. But 50 files is unreasonable. I use the rule of a hand (no more than 5), and surely you'll see a benefit from combining five 1 KB files into one 5 KB file.
Two reasons that I can think of:
Less network latency. Each .js requires another request/response to the server it's downloaded from.
More bytes on the wire and more memory when files are kept separate. If it's a single file, you can strip out unnecessary characters and minify the whole thing at once.
The Javascript should be designed so that the extra functions don't execute at all unless they're needed.
For example, you can define a set of functions in your script but only call them in (very short) inline <script> blocks in the pages themselves.
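A minimal sketch of that pattern (the function and element names are made up):

    // Shared site-wide script (sketch): initialisers are defined once for every page,
    // but nothing runs until a page explicitly calls its own initialiser.
    const pageInit = {
      home(): void {
        document.querySelector("#carousel")?.classList.add("active");
      },
      contact(): void {
        document.querySelector("#contact-form")?.addEventListener("submit", (e) => {
          e.preventDefault();
          // ... validate and send ...
        });
      },
    };

    // Each page then contains only a very short inline call, e.g. on the contact page:
    //   <script>pageInit.contact();</script>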
My line of thought is that you have fewer requests. When you make a request in the header of the page, it stalls the output of the rest of the page: the user agent cannot render the rest of the page until the JavaScript files have been obtained. Also, JavaScript files download synchronously; they queue up instead of being pulled all at once (at least that is the theory).
It's no secret that application logs can go well beyond the limits of naive log viewers, and the desired viewer functionality (say, filtering the log based on a condition, or highlighting particular message types, or splitting it into sublogs based on a field value, or merging several logs based on a time axis, or bookmarking etc.) is beyond the abilities of large-file text viewers.
I wonder:
Whether decent specialized applications exist (I haven't found any)
What functionality might one expect from such an application? (I'm asking because my student is writing such an application, and the functionality above has already been implemented to a certain extent of usability)
I've been using Log Expert lately.
(Screenshot: http://www.log-expert.de/images/stories/logexpertshowcard.gif)
It can take a while to load large files, but it will in fact load them. I couldn't find the file size limit (if there is any) on the site, but just to test it, I loaded a 300 MB log file, so it can at least go that high.
Windows Commander has a built-in program called Lister which works very quickly for any file size. I've used it with GBs worth of log files without a problem.
http://www.ghisler.com/lister/
A slightly more powerful tool I sometimes use is Universal Viewer, from http://www.uvviewsoft.com/.