TopoJSON: quantization vs. simplification

What is the difference between quantization and simplification?
Is quantization another way of doing simplification?
Is it better to use quantization in certain situations?
Or should I be using a combination of both?

The total size of your geometry is controlled by two factors: the number of points and the number of digits (the precision) of each coordinate.
Say you have a large geometry with 1,000,000 points, where each two-dimensional point is represented as longitude in ±180° and latitude in ±90°:
[-90.07231180399987,29.501753271000098],[-90.06635619599979,29.499494248000133],…
Real numbers can have arbitrary precision (in JSON; in JavaScript they are limited by the precision of IEEE 754) and thus an infinite number of digits. But in practice the above is pretty typical, so say each coordinate has 18 digits. Including extra symbols ([, ] and ,), each point takes at most 1 + 18 + 1 + 18 + 1 = 39 bytes to encode in JSON, and the entire geometry is about 39 * 1,000,000 ≈ 39MB.
Now say we convert these real numbers to integers: both longitude and latitude are reduced to integers x and y where 0 ≤ x ≤ 99 and 0 ≤ y ≤ 99. A simple mapping between real-number points ⟨λ,φ⟩ and integer coordinates ⟨x,y⟩ is:
x = floor((λ + 180) / 360 * 100);
y = floor((φ + 90) / 180 * 100);
Since each coordinate now takes at most 2 digits to encode, each point takes at most 1 + 2 + 1 + 2 + 1 = 7 bytes to encode in JSON, and the entire geometry is about 7MB; we reduced the total size by 82%.
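As a small sketch, here is that mapping applied to the first point quoted above (plain JavaScript; nothing TopoJSON-specific is assumed):
// The quantization mapping above, applied to the first point quoted earlier.
function quantize([lon, lat]) {
  const x = Math.floor((lon + 180) / 360 * 100);
  const y = Math.floor((lat + 90) / 180 * 100);
  return [x, y];
}
console.log(JSON.stringify(quantize([-90.07231180399987, 29.501753271000098])));
// prints "[24,66]": 7 bytes instead of the original 39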
Of course, nothing comes for free: if you remove too much information, you will no longer be able to display the geometry accurately. The rule of thumb is that the size of your grid should be at least twice as big as the largest expected display size for the entire map. For example, if you’re displaying a world map in a 960×500 pixel space, then the default 10,000×10,000 (-q 1e4) is a reasonable choice.
So, quantization removes information by reducing the precision of each coordinate, effectively snapping each point to a regular grid. This reduces the size of the generated TopoJSON file because each coordinate is represented as an integer (such as between 0 and 9,999) with fewer digits.
In contrast, simplification removes information by removing points, applying a heuristic that tries to measure the visual salience of each point and removing the least-noticeable points. There are many different methods of simplification, but the Visvalingam method used by the TopoJSON reference implementation is described in my Line Simplification article so I won’t repeat myself here.
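For a rough flavor of the idea (just a sketch, not the reference implementation): score each interior point by the area of the triangle it forms with its two neighbors, and repeatedly drop the least-noticeable point.
// Greedy Visvalingam-style sketch: repeatedly remove the interior point whose
// triangle with its neighbors has the smallest area, until every remaining
// point's triangle area is at least minArea.
function triangleArea(a, b, c) {
  return Math.abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2;
}
function simplify(points, minArea) {
  const pts = points.slice();
  while (pts.length > 2) {
    let smallest = Infinity, index = -1;
    for (let i = 1; i < pts.length - 1; i++) {
      const area = triangleArea(pts[i - 1], pts[i], pts[i + 1]);
      if (area < smallest) { smallest = area; index = i; }
    }
    if (smallest >= minArea) break; // every remaining point is salient enough
    pts.splice(index, 1);           // drop the least-noticeable point
  }
  return pts;
}
// Nearly collinear interior points form tiny triangles, so a (nearly) straight
// line collapses to its two endpoints:
console.log(simplify([[0, 0], [1, 0.01], [2, 0], [3, 0], [4, 0]], 0.5));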
While quantization and simplification address these two different types of information mostly independently, there’s an additional complication: quantization is applied before the topology is constructed, whereas simplification is necessarily applied after to preserve the topology. Since quantization frequently introduces coincident points ([24,62],[24,62],[24,62]…), and coincident points are removed, quantization can also remove points.
The reason that quantization is applied before the topology is constructed is that geometric inputs are often not topologically valid. For example, if you take a shapefile of Nevada counties and combine it with a shapefile of Nevada's state border, the coordinates in one shapefile might not exactly match the coordinates in the other shapefile. By quantizing the coordinates before constructing the topology, you snap the coordinates to a regular grid and can get a cleaner topology with fewer arcs, hopefully correctly identifying all shared arcs. (Of course, if you over-quantize, then you can cause too many coincident points and get self-intersecting arcs, which causes other problems.)
In a future release, maybe 1.5.0, TopoJSON will allow you to control the quantization before the topology is constructed independently from the quantization of the output TopoJSON file. Thus, you could use a finer grid (or no grid at all!) to compute the topology, then simplify, then use a coarser grid appropriate for a low-resolution screen display. For now, these are tied together, so I recommend using a finer grid (e.g., -q 1e6) that produces a clean topology, at the expense of a slightly larger file. Since TopoJSON also uses delta-encoded coordinates, you rarely pay the full price for all the digits anyway!

The two are related, but have different purposes and results.
I believe quantization collapses nearby points based on the parameter (which you tune to the expected resolution of the view): there is no point in having a resolution higher than the pixels that will be displaying the map. But it doesn't go out of its way to analyze the path to determine the optimal number of points needed to represent the shape.
Simplification is an algorithm that will analyze the polygon and reduce the number of points in an optimal manner such that the overall deformation of the polygon is minimized. Basically, it can be used to dramatically reduce the number of points (and thus file size) without noticeable impact on the quality of the path.
As a parallel case study, consider a straight line made up of 10 points. Quantization will reduce the number of points (collapsing nearby or coincident points) based on the value you use. Simplification will analyze the line and realize that 8 of the 10 points can be removed without significantly changing the shape, and reduce the line to two points (because there is no deformation of the path when removing points on a straight line).
See also:
Topojson reference: https://github.com/mbostock/topojson/wiki/Command-Line-Reference
M. Bostock's Simplification article: http://bost.ocks.org/mike/simplify/
Both should be used in combination: quantization to reduce the map to a right-sized grid, simplification to optimize the paths.

Negative binomial regression SPSS - Quantity vs Distance

I have quite a simple dataset of quantities of litter found in a national park located on an island. For each data point I have corresponding GPS coordinates, and I've derived the distance of each point to the shore. My aim is to see whether the quantities of litter increase or decrease with the distance to the shore. I'm assuming that quantities of litter will increase as the distance decreases, as litter is commonly found on beaches, etc.
Quantities of litter are counts, i.e. non-parametric. Additionally, I've tested the data to see if it follows a Poisson model and it does not (p-value < 0.05), and the variance is larger than the mean for each variable (quantity and distance), so the data seem overdispersed. Therefore, I went on to fit a negative binomial regression, with output as follows:
The omnibus test is highly significant (p = 0.000). I was just slightly puzzled by the parameter estimates, and am generally hoping that this approach makes sense. Any input much appreciated.
Interpreting the parameter estimates requires knowing the link function specified, which would be a log link if you specified your model as a negative binomial with log link on the Type of Model tab, but could be something else if you specified a custom model using a negative binomial distribution with another link (which could be identity, negative binomial, or power, instead).
If it's a log link, then for a distance of 0 (at the shore), you predict exp(2.636) for the count, or about 13.96. For a given distance from the shore, multiply the distance by -0.042, add that to the 2.636 value, then take the exponential of the result. So for every unit you move away from the shore, the log of the prediction decreases by 0.042, and the prediction is multiplied by about 0.959. One unit away, you predict about 13.38 for the count; two units away, about 12.83, etc. So the results are in general accord with your hypothesis. Different specific calculations would be required if you used a different link function.
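If it helps, here is the same arithmetic as a tiny script (assuming the log link and the two estimates quoted above; the function name is just for illustration):
// Predicted litter count under a negative binomial model with a log link,
// using the intercept (2.636) and distance coefficient (-0.042) quoted above.
function predictCount(distance) {
  return Math.exp(2.636 - 0.042 * distance); // inverse of the log link
}
console.log(predictCount(0)); // ≈ 13.96 at the shore
console.log(predictCount(1)); // ≈ 13.38 one unit away
console.log(predictCount(2)); // ≈ 12.83 two units away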

Why do neural networks have weight space symmetry?

"Neural nets have a weight space symmetry: we can permute all the hidden units in a given layer and obtain an equivalent solution" (From CSC321, lecture 10, Optimation)
I don't think it make sense, is there something wrong with my understanding?
For example, there is a simple DNN with 2 units in the only hidden layer. And there is one local optima and one global optima like this:
Obviously 2 symmetric points will result in different solution, they will go into different optima(the right-bottom one is the global optima).
Please tell me where it goes wrong?
I think you are missing the definition of symmetry.
Geometry is the branch of mathematics studying invariants under some class of transformations. The invariants of a geometry are called the symmetries of the geometry. For instance, the symmetries of Euclidean geometry are lengths and angles, because rotations and translations (the group of Euclidean transformations) preserve them. Simply put, in Euclidean geometry, lengths and angles are the symmetries of the geometry. In the same vein, the symmetry of affine geometry is parallelism.
In the context of deep learning, weight space symmetry means that non-identifiable models are invariant to random permutations in their weight layers. This symmetry holds because in deep learning there are generally not enough training samples to rule out all parameter settings but one; there usually exists a large number of possible weight combinations for a given dataset that yield similar model performance.
Sure, if you permute the weights of the input layer randomly, you will not end up with the same result, because the order of the input elements matters.
The permutation symmetry is about permuting the neurons of hidden layers, not about permuting the weights of a single neuron.
For example, say your hidden layer has 2 neurons with weights w11, w12, w13 and w21, w22, w23.
The permutation principle states that you can permute
w11 <-> w21, w12 <-> w22 and w13 <-> w23 (along with the two neurons' outgoing weights to the next layer) and the result will remain the same.
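A tiny numerical sketch of this (the weights are made up; a 3-input, 2-hidden-unit, 1-output network): swapping the two hidden units' incoming and outgoing weights leaves the output unchanged for any input.
// Permuting the hidden units (their incoming AND outgoing weights) gives the
// same output for every input.
const relu = (v) => Math.max(0, v);
const dot = (a, b) => a.reduce((s, ai, i) => s + ai * b[i], 0);
function forward(x, hiddenWeights, outputWeights) {
  const hidden = hiddenWeights.map((w) => relu(dot(w, x)));
  return dot(outputWeights, hidden);
}
const x = [0.5, -1.2, 2.0];                   // arbitrary input
const W = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]; // rows = the two hidden units
const v = [0.7, -0.8];                        // their outgoing weights
const Wswapped = [W[1], W[0]];                // permute the hidden units...
const vswapped = [v[1], v[0]];                // ...and their outgoing weights
console.log(forward(x, W, v));                // -0.353
console.log(forward(x, Wswapped, vswapped));  // -0.353 (identical)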
The weight symmetry here means that there is an equivalent set of weights that maps the input to the output. It doesn't mean geometrical symmetry in coordinate space. You can have a deeper look at Bishop, Ch. 5.1.

understanding getByteTimeDomainData and getByteFrequencyData in web audio

The documentation for both of these methods is very generic wherever I look. I would like to know what exactly I'm looking at with the returned arrays I'm getting from each method.
For getByteTimeDomainData, what time period is covered with each pass? I believe most oscilloscopes cover a 32-millisecond span for each pass. Is that what is covered here as well? For the actual element values themselves, the range seems to be 0 - 255. Is this equivalent to -1 to +1 volts?
For getByteFrequencyData, the frequencies covered are based on the sampling rate, so each index is an actual frequency, but what about the actual element values themselves? Is there a dB range that corresponds to the values in the returned array?
getByteTimeDomainData (and the newer getFloatTimeDomainData) return an array of the size you requested - it's frequencyBinCount, which is calculated as half of the requested fftSize. That array is, of course, at the current sampleRate exposed on the AudioContext, so if it's the default 2048 fftSize, frequencyBinCount will be 1024, and if your device is running at 44.1kHz, that will equate to around 23ms of data.
The byte values do range between 0-255, and yes, that maps to -1 to +1, so 128 is zero. (It's not volts, but full-range unitless values.)
If you use getFloatFrequencyData, the values returned are in dB; if you use the Byte version, the values are mapped based on minDecibels/maxDecibels (see the minDecibels/maxDecibels description).
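For reference, a minimal AnalyserNode setup looks roughly like this (the oscillator is only a stand-in source; fftSize, minDecibels and maxDecibels behave as described above):
// Minimal AnalyserNode sketch.
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;       // size of the FFT
analyser.minDecibels = -90;    // dB value mapped to byte value 0
analyser.maxDecibels = -10;    // dB value mapped to byte value 255
const source = ctx.createOscillator(); // stand-in for your real audio source
source.connect(analyser);
source.start();
const timeData = new Uint8Array(analyser.fftSize);           // waveform samples, 128 = zero
const freqData = new Uint8Array(analyser.frequencyBinCount); // fftSize / 2 frequency bins
function poll() {
  analyser.getByteTimeDomainData(timeData); // current waveform, values 0-255
  analyser.getByteFrequencyData(freqData);  // current spectrum, dB mapped to 0-255
  requestAnimationFrame(poll);
}
poll();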
Mozilla's documentation describes the difference between getFloatTimeDomainData and getFloatFrequencyData, which I summarize below. The Mozilla docs reference the Web Audio experiment, the voice-change-o-matic, which illustrates the conceptual difference to me (it only works in my Firefox browser; it does not work in my Chrome browser).
TimeDomain/getFloatTimeDomainData
TimeDomain functions are over some span of time.
We often visualize TimeDomain data using oscilloscopes.
In other words:
we visualize TimeDomain data with a line chart,
where the x-axis (aka the "original domain") is time
and the y-axis is a measure of a signal (aka the "amplitude").
Change the voice-change-o-matic "visualizer setting" to Sinewave to see getFloatTimeDomainData(...)
Frequency/getFloatFrequencyData
Frequency functions (getByteFrequencyData) are at a point in time, i.e. right now: "the current frequency data"
We sometimes see these in mp3 players/ "winamp bargraph style" music players (aka "equalizer" visualizations).
In other words:
we visualize Frequency data with a bar graph
where the x-axis (aka "domain") are frequencies or frequency bands
and the y-axis is the strength of each frequency band
Change the voice-change-o-matic "visualizer setting" to Frequency bars to see getFloatFrequencyData(...)
Fourier Transform (aka Fast Fourier Transform/FFT)
Another way to think about "time domain vs. frequency" is shown in the diagram from the Fast Fourier Transform Wikipedia article.
getFloatTimeDomainData gives you the chart on the top (x-axis is time)
getFloatFrequencyData gives you the chart on the bottom (x-axis is frequency)
A Fast Fourier Transform (FFT) converts the time-domain data into frequency data; in other words, the FFT converts the first chart into the second chart.
cwilso has it backwards.
The time data array is the longer one (fftSize), and the frequency data array is the shorter one (half that, frequencyBinCount).
An fftSize of 2048 at the usual sample rate of 44.1kHz means each sample has a duration of 1/44100 seconds; you have 2048 samples at hand, and thus are covering a duration of 2048/44100 seconds, which is about 46 milliseconds, not 23 milliseconds. The frequencyBinCount is indeed 1024, but that refers to the frequency domain (as the name suggests), not the time domain, and the computation 1024/44100, in this context, is about as meaningful as adding your birth date to the fftSize.
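As a quick check of that arithmetic:
// Span of one time-domain buffer: fftSize samples at the device sample rate.
const fftSize = 2048;
const sampleRate = 44100; // Hz
console.log((fftSize / sampleRate * 1000).toFixed(1) + " ms"); // "46.4 ms"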
A little math illustrating what's happening: the Fourier transform is a 'vector space isomorphism', that is, a bijective (i.e., reversible) mapping between two vector spaces of the same dimension: the 'time domain' and the 'frequency domain'. The vector space dimension we have here (in both cases) is fftSize.
So where does the 'half' come from? The frequency domain coefficients 'count double'. Either because they 'actually are' complex numbers, or because you have the 'sin' and the 'cos' flavor. Or, because you have a 'magnitude' and a 'phase', which you'll understand if you know how complex numbers work. (Those are 3 ways to say the same in a different jargon, so to speak.)
I don't know why the API only gives us half of the relevant numbers when it comes to frequency - I can only guess. And my guess is that those are the 'magnitude' numbers, and the 'phase' numbers are thrown out. The reason that this is my guess is that in applications, magnitude is far more important than phase. Still, I'm quite surprised that the API throws out information, and I'd be glad if some expert who actually knows (and isn't guessing) can confirm that it's indeed the magnitude. Or - even better (I love to learn) - correct me.

Multidimensional interpolation

Given a dataset of samples in a multi-dimensional space (in my case a 4D space), where the samples are present on all the corners of the 4D cube plus a substantial number of samples within this cube, but not on a neat grid. Each sample has an output value next to its 4D coordinate. The cube has coordinates [0,0,0,0]..[1,1,1,1].
Given a new 4D coordinate, how can I come up with the best interpolated value given these samples? E.g., how do I choose the samples to start with, and how do I interpolate?
As a first guess I would guess that this can be done with a two step process:
find the smallest convex pentachoron (the 4D equivalent of the 3D tetrahedron / the 2D triangle) around the coordinate we need to interpolate.
interpolate within this pentachoron.
Especially step 1 seems quite complex and slow.
Here's the first approach I'd try.
Step 1
Find the point's 5 nearest neighbors by Euclidean distance. It's important that these 5 points are affinely independent, because next they're used to create a barycentric coordinate system. Those 5 points become the vertices of your pentachoron (aka 4-simplex).
If nearest-neighbor checks are too slow, try structuring your data into a spatial lookup tree that works in 4D.
Step 2
Now we need to associate a value with the interpolation point X. Start by deriving X's representation in this new barycentric coordinate system. This barycentric coordinate consists of 5 numbers, which collectively describe the position of the interpolation point relative to each of the 4-simplex's 5 vertices.
Normalize the barycentric coordinate so its components sum to 1.
Each of those 5 simplex vertices is a data point and has an output value. Combine those 5 output values into a vector.
Finally, interpolate by calculating the dot product of the normalized coordinate with the vector of output values.
Source: This idea is really just a 4D extension of this gem in the middle of the Barycentric coordinate system page on Wikipedia.
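Here is a rough sketch of those two steps (the five vertices, their output values and the query point are all made up; a real implementation would pick the five nearest, affinely independent samples from your data):
// Gaussian elimination (Gauss-Jordan, partial pivoting) for a small dense system.
function solve(A, b) {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++) if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  return M.map((row, i) => row[n] / row[i][i]);
}
// Barycentric weights of point x with respect to the 5 vertices of a 4-simplex:
// solve sum_i w_i * v_i = x together with sum_i w_i = 1.
function barycentricWeights(vertices, x) {
  const A = [];
  for (let d = 0; d < 4; d++) A.push(vertices.map((v) => v[d]));
  A.push(vertices.map(() => 1));
  return solve(A, [...x, 1]);
}
// Hypothetical data: 5 vertices in the unit 4-cube, each with an output value.
const vertices = [[0,0,0,0], [1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]];
const values = [1.0, 2.0, 3.0, 4.0, 5.0];
const x = [0.2, 0.1, 0.3, 0.1];            // query point inside the simplex
const w = barycentricWeights(vertices, x); // weights sum to 1
const interpolated = w.reduce((s, wi, i) => s + wi * values[i], 0);
console.log(interpolated); // 2.7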

How to represent stereo audio data for FFT

How should stereo (2 channel) audio data be represented for FFT? Do you
A. Take the average of the two channels and assign it to the real component of a number and leave the imaginary component 0.
B. Assign one channel to the real component and the other channel to the imag component.
Is there a reason to do one or the other? I searched the web but could not find any definite answers on this.
I'm doing some simple spectrum analysis and, not knowing any better, used option A). This gave me an unexpected result, whereas option B) went as expected. Here are some more details:
I have a WAV file of a piano "middle-C". By definition, middle-C is 260Hz, so I would expect the peak frequency to be at 260Hz and smaller peaks at harmonics. I confirmed this by viewing the spectrum via an audio editing software (Sound Forge). But when I took the FFT myself, with option A), the peak was at 520Hz. With option B), the peak was at 260Hz.
Am I missing something? The explanation that I came up with so far is that representing stereo data using a real and imag component implies that the two channels are independent, which, I suppose they're not, and hence the mess-up.
I don't think you're taking the average correctly. :-)
C. Process each channel separately, assigning the amplitude to the real component and leaving the imaginary component as 0.
Option B does not make sense. Option A, which amounts to converting the signal to mono, is OK (if you are interested in a global spectrum).
Your problem (double freq) is surely related to some misunderstanding in the use of your FFT routines.
Once you take the FFT you need to get the magnitude of the complex frequency spectrum. To get the magnitude you take the absolute value of the complex spectrum, |X(w)|. If you want to look at the power spectrum, you square the magnitude spectrum, |X(w)|^2.
In terms of your frequency shift, I think it has to do with setting the imaginary parts to zero.
Imagine the complex frequency spectrum as a series of complex vectors, or position vectors, in a Cartesian space. If you took one discrete frequency bin X(w), there would be one real component representing its extent along the real axis (the x-direction), and one imaginary component along the imaginary axis (the y-direction). There are four important values for this discrete frequency: 1. the real value, 2. the imaginary value, 3. the magnitude, and 4. the phase. If you just take the real value and set the imaginary part to 0, you are setting magnitude = real and phase = 0° or 180°. You have thereby modified the resulting spectrum and applied a bias to every frequency bin. Take a look at the wiki on the magnitude of a vector, also called the Euclidean norm of a vector, to brush up on your understanding. Leonbloy was correct, but I hope this was more informative.
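A minimal sketch of that magnitude/power computation, assuming your FFT routine hands you the spectrum as parallel arrays of real and imaginary parts:
// Magnitude spectrum |X(w)| and power spectrum |X(w)|^2 from a complex spectrum.
function magnitudeSpectrum(re, im) {
  return re.map((r, k) => Math.hypot(r, im[k])); // sqrt(re^2 + im^2)
}
function powerSpectrum(re, im) {
  return magnitudeSpectrum(re, im).map((m) => m * m);
}
// Made-up example bins:
const re = [1.0, 0.0, -3.0];
const im = [0.0, 2.0, 4.0];
console.log(magnitudeSpectrum(re, im)); // [1, 2, 5]
console.log(powerSpectrum(re, im));     // [1, 4, 25]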
Think of the FFT as a way to get information from a single signal. What you are asking is what is the best way to display data from two signals. My answer would be to treat each independently, and display an FFT for each.
If you want a really fast streaming FFT you can read about an algorithm I wrote here: www.depthcharged.us/?p=176