Group codes 210 220 230 not in DXF file - dxf

I have a simple question, I'm saving a DXF file as R12, but I can't find the group codes 210 220 and 230 for arcs. This is a piece of the DXF file:
0
ARC
5
44
8
0
6
CONTINUOUS
62
7
10
0.0
20
0.0
30
0.0
40
180.0
50
0.0
51
180.0
0
ARC
5
Do I need to save this DXF file as another version? I need this information to determine the rotation direction of the arc (CW or CCW). Thanks for the help!

The help documentation provided by Autodesk indicates that the 210 / 220 / 230 values describe the extrusion direction and are optional.
The help documentation also states:
Arcs are drawn in a counterclockwise direction by default. Hold down the Ctrl key as you drag to draw in a clockwise direction.
The rotation of the arc is given by group codes 50 and 51, the start and end angles (expressed in degrees; the arc is always drawn counterclockwise from the start angle to the end angle). That is, unless you are referring to the 3D orientation relative to another plane, in which case the extrusion codes are required. If you create your arcs in the World Coordinate System (WCS), the extrusion values can be omitted.
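Since a DXF file is just alternating group-code/value lines, the direction can be read straight from codes 50 and 51 without any extrusion data. A minimal plain-Python sketch (the helper names are mine, not from any DXF library) that parses tag pairs from a snippet like the one above and reports the counterclockwise sweep:

```python
# Parse DXF group-code/value pairs and report an ARC's angles.
# In DXF, an ARC always sweeps counterclockwise from the start angle
# (code 50) to the end angle (code 51), both in degrees, so the
# "direction" is encoded by which angle is which, not by a CW/CCW flag.
def parse_pairs(lines):
    """Yield (group_code, value) tuples from raw DXF tag lines."""
    it = iter(lines)
    for code in it:
        value = next(it)
        yield int(code.strip()), value.strip()

sample = """10
0.0
20
0.0
30
0.0
40
180.0
50
0.0
51
180.0""".splitlines()

arc = dict(parse_pairs(sample))
start, end = float(arc[50]), float(arc[51])
sweep = (end - start) % 360  # CCW sweep in degrees
print(f"start={start} end={end} CCW sweep={sweep}")
```

To draw the same geometry clockwise, a DXF writer simply swaps the two angles (or uses an extrusion vector of (0, 0, -1)), which is why no separate direction flag appears in the file.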

Related

Neural network with sigmoid neurons does not learn if a factor is added to all weights and biases after initialization

I'm about to experiment with a neural network for handwriting recognition, which can be found here:
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network.py
If the weights and biases are randomly initialized, it recognizes over 80% of the digits after a few epochs. If I add a small factor of 0.27 to all weights and biases after initialization, learning is much slower, but eventually it reaches the same accuracy of over 80%:
self.biases = [np.random.randn(y, 1)+0.27 for y in sizes[1:]]
self.weights = [np.random.randn(y, x)+0.27 for x, y in zip(sizes[:-1], sizes[1:])]
Epoch 0 : 205 / 2000
Epoch 1 : 205 / 2000
Epoch 2 : 205 / 2000
Epoch 3 : 219 / 2000
Epoch 4 : 217 / 2000
...
Epoch 95 : 1699 / 2000
Epoch 96 : 1706 / 2000
Epoch 97 : 1711 / 2000
Epoch 98 : 1708 / 2000
Epoch 99 : 1730 / 2000
If I add a small factor of 0.28 to all weights and biases after initialization, the network isn't learning at all anymore.
self.biases = [np.random.randn(y, 1)+0.28 for y in sizes[1:]]
self.weights = [np.random.randn(y, x)+0.28 for x, y in zip(sizes[:-1], sizes[1:])]
Epoch 0 : 207 / 2000
Epoch 1 : 209 / 2000
Epoch 2 : 209 / 2000
Epoch 3 : 209 / 2000
Epoch 4 : 209 / 2000
...
Epoch 145 : 234 / 2000
Epoch 146 : 234 / 2000
Epoch 147 : 429 / 2000
Epoch 148 : 234 / 2000
Epoch 149 : 234 / 2000
I think this has to do with the sigmoid function, which gets very flat close to one and zero. But what happens at this point, when the mean of the weights and biases is 0.28? Why is there such a steep drop in the number of recognized digits? And why are there outliers like the 429 above?
Initialization plays a big role in training networks. A good initialization can make training and convergence a lot faster, while a bad one can make it many times slower. It can even determine whether the network converges at all.
You might want to read this for some more information on the topic:
https://towardsdatascience.com/weight-initialization-in-neural-networks-a-journey-from-the-basics-to-kaiming-954fb9b47c79
By adding 0.27 to all weights and biases you probably shift the network away from the optimal solution and increase the gradients. Depending on the layer count this can lead to exploding gradients, so you get very large weight updates on every iteration. What could be happening is that some weight sits at 0.3 (after adding 0.27 to it) while its optimal value would be 0.1. It then gets an update of -0.4 and lands at -0.1; the next update might be +0.4 (or something close), putting it back where it started. So instead of moving slowly toward the optimal value, the optimization overshoots and bounces back and forth. This might settle down after some time, or it might never converge because the network just keeps bouncing around.
Also, in general you want biases to be initialized to 0 or very close to zero. If you experiment further, try not adding 0.27 to the biases and instead keeping them at 0 or something close to 0 initially. Maybe then the network can actually learn again.
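The saturation idea from the question can be checked numerically. The sketch below assumes MNIST-like inputs (784 pixels, mostly near zero, mean around 0.1; these figures are my assumptions, not from the question): shifting every weight by +0.28 adds roughly 784 * 0.1 * 0.28 ≈ 22 to a first-layer pre-activation, which lands deep in the sigmoid's flat region where the gradient is effectively zero:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid: s * (1 - s).
    s = sigmoid(z)
    return s * (1.0 - s)

# Gradient magnitude at increasingly saturated pre-activations.
for z in (0.0, 2.0, 5.0, 22.0):
    print(f"z={z:5.1f}  sigmoid'(z)={sigmoid_prime(z):.2e}")
```

At z = 0 the derivative is 0.25, but at z ≈ 22 it is on the order of 1e-10, so backpropagated updates through saturated neurons are vanishingly small. That would explain why 0.28 nearly stops learning while 0.27 merely slows it: the shift sits right at the edge of saturation.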

Tesseract Training - new font with only digits

Hello, I am trying to train Tesseract on a new font based on the following digits:
All digits are provided in a PNG file with a transparent background. If I create a box file from it, train it, and so on, everything works fine!
Now the problem: same situation, but I want to train Tesseract on the following image:
As you can see, the digits are exactly the same, as are their positions. The only difference from image 1 is that I used a yellow background, and from that point on nothing works anymore. I created a box file with the same positions as for the first image:
0 5 4 20 22 0
1 27 4 38 21 0
2 48 4 60 22 0
3 71 3 83 22 0
4 94 5 109 22 0
5 119 5 131 22 0
6 143 5 157 22 0
7 172 5 184 22 0
8 197 5 211 23 0
9 224 5 238 22 0
Then I trained on the box file, but the resulting .tr file was completely empty. I didn't stop there and completed all the other steps, but the resulting trained font is unusable!
So my question is: how do I train Tesseract to recognize these digits no matter which background is used?
Edit 2016-04-16:
I used ImageMagick to preprocess the images and found a command which works very well for all kinds of backgrounds. So I wanted to train Tesseract on these preprocessed images, but it doesn't work the way I thought it would.
First of all, I created box files, most of which came out empty. I used a website to organize the character positions and spent a lot of time making the cropping perfect! Afterwards I created the resulting .tr files and went through the other training steps.
Finally I got the "traineddata"; I moved the file into Tesseract's "tessdata" directory and used it as intended:
tesseract example.jpg output -l mg
(I called the new font "mg".)
But it doesn't recognize all, or even most, of the digits! I opened this thread to find help; so far nobody really seems to have a clue how to do this, sadly. Please help me out.
All the Tesseract training files I used and created can be found here:
Tesseract training directory (as no zip/not compressed -> view of all files of the directory)
You can convert any color image to a binary (black-and-white) image and then run Tesseract on it. That way, no matter what background color is used, you will always get the same result.
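The core of that advice is a simple threshold: every pixel darker than some cutoff becomes ink, everything else becomes background, so the background color stops mattering. A toy plain-Python sketch of just the thresholding step (real preprocessing would use ImageMagick or Pillow on actual image files; the pixel values here are made up):

```python
# Threshold grayscale pixels to pure black/white so any background
# color (yellow, transparent, whatever) maps to the same binary image.
def binarize(gray_pixels, threshold=128):
    """Map each 0-255 grayscale value to 0 (ink) or 255 (background)."""
    return [0 if p < threshold else 255 for p in gray_pixels]

# One row of pixels: bright background with a dark digit stroke inside.
row = [250, 245, 30, 12, 25, 240, 255]
print(binarize(row))  # -> [255, 255, 0, 0, 0, 255, 255]
```

On real images, an ImageMagick command along the lines of `convert input.png -colorspace Gray -threshold 50% output.png` does the same thing before both training and recognition; the key point is to apply the identical preprocessing in both phases.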

How "wide" or "far" is a google coordinate?

If I have the following coordinates obtained from the Google API:
[longitude] => 18.12288
[latitude] => -23.1233399
I want to know how accurate this coordinate is. In other words, what area does this specific coordinate cover? Is it a 1 meter by 1 meter area, or is it less accurate and maybe cover a 50 by 50 meter area? How do you calculate the area it covers?
UPDATE
Using this calculator, I could get:
0.000001 = .1 meter
0.00001 = 1 meter
0.0001 = 11 meters
0.001 = 111 meters
0.01 = 1113 meters / 1.1 km
0.1 = 11132 meters / 11.1 km
1.0 = 111319 meters / 111 km
Is this correct?
You can work it out here:
http://www.csgnetwork.com/gpscoordconv.html
Coordinates are usually accurate to about a meter if the seconds are well defined.
Apparently I need more rep to comment; anyhow:
No, your last comment is wrong.
Coordinates are usually pinpoint accurate, assuming you have seconds included in your coordinate.
To work out standard coordinates of the form
xx.xxxxxxxxx
the first two digits are your degrees,
so it splits into "xx" with "xxxxxxx" as the decimal remainder.
To get minutes, multiply the decimal remainder by 60;
it now looks like "xx" "xx" "xxxxx".
The decimal part of that result is again multiplied by 60 to get your seconds.
You may be left with decimals after you work out the seconds, but those are fine; the more digits you have, the more accurate your coordinate is.
hope this helps.
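Decimal degrees convert to degrees/minutes/seconds by taking the integer part and multiplying the fractional remainder by 60 at each step. A quick plain-Python sketch, using the longitude from the question:

```python
def deg_to_dms(dec):
    """Convert decimal degrees to (degrees, minutes, seconds)."""
    sign = -1 if dec < 0 else 1
    dec = abs(dec)
    d = int(dec)                      # whole degrees
    m = int((dec - d) * 60)           # fraction of a degree * 60 -> minutes
    s = ((dec - d) * 60 - m) * 60     # fraction of a minute * 60 -> seconds
    return sign * d, m, s

d, m, s = deg_to_dms(18.12288)
print(f"{d} deg {m}' {s:.3f}\"")  # -> 18 deg 7' 22.368"
```

Note that six decimal places of a degree correspond to about 0.1 m at the equator, which matches the calculator table in the question.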
The length of a degree in a projected system (a 2D system such as the one used by Google Maps) depends on the latitude. Using this simple calculator, you can see that if you change the latitude from 0 degrees to 90 degrees (Equator to North Pole), you get a different length for a degree of latitude (by about a kilometer: roughly 111.7 km at the North Pole vs. 110.6 km at the Equator).
Wikipedia has a good summary of the lengths at the equator and those match the ones you typed out. Based on the lat/long that you provided, the accuracy would be around 1 meter.

Inverse radix4 FFT

I have a radix-4 FFT that works in the forward direction. How different is the inverse FFT from the forward one? I think the only difference is the twiddle factors. My code is a modified version of
Source. Can someone enlighten me on this? Thanks.
My output
50 688
-26 -6
-10 -16
6.0 -26
Expected output
50 688
6 -26
-10 -16
-26 -6
Google search "how to compute inverse FFT". Top result:
http://www.adamsiembida.com/node/23
The equation:
IFFT(X) = 1/N * conj(FFT(conj(X)))
conj() means "complex conjugate", which means negating the imaginary part of each complex value.
http://en.wikipedia.org/wiki/Complex_conjugate

Programmatically converting out of TOPO50

I've got a set of data that is referenced to NZ TOPO50 locations. I've been trying to work out how to convert them to something useful like WGS84 lat/lon.
I have gone through the documentation at http://www.linz.govt.nz and am still stuck.
An example is "BA32 582206" becomes "36 50 50S 174 46 28E". I have found the NZ-topo-50-map-sheets.xls that has a 5-point multipolygon describing BA32, but I cannot work out how the 58/2 and 20/6 become 174 46 28E and 36 50 50S respectively.
Use the online tool at
http://apps.linz.govt.nz/coordinate-conversion/index.aspx
Choose "New Zealand Transverse Mercator Projection" as the input coordinate system and "World Geodetic System 1984" as the output coordinate system.
Your input coordinates should be something like
5413457 North
1528677 East
Then you get your WGS84 coordinates.