I am looking for a mathematical function capable of flipping the trigonometric circle around the "axis" located on the 45° vector (or pi/4 radians).
Such as:
|---------|---------|
| x | f(x) |
|---------|---------|
| 0 | 90 |
| 45 | 45 |
| 90 | 0 |
| 135 | 315 |
| 180 | 270 |
| 225 | 225 |
| 270 | 180 |
| 315 | 135 |
|---------|---------|
Just to give a few examples.
So basically I need to turn a compass rose:
into a trigonometric circle:
I only found things like "180 - angle" but that is not the kind of rotation I'm looking for.
Is it possible?
The main difficulty in your problem is the "jump" in the result, where f(90) should be 0 but f(91) should be 359. A solution is to use an operator that also jumps: the modulus operator, often represented by the % character.
In a language like Python, where the modulus operator always returns a non-negative result, you could use
def f(x):
    return (90 - x) % 360
Some languages return a negative result when the left operand is negative. You can guard against that by adding a full turn (360°) before taking the modulus; this version should work in all languages with a modulus operator:
def f(x):
    return (450 - x) % 360
You can demonstrate either function with:
for a in range(0, 360, 45):
    print(a, f(a))
which prints what was desired:
0 90
45 45
90 0
135 315
180 270
225 225
270 180
315 135
If your language does not have the modulus operator, it can be simulated by using the floor or int function. Let me know if you need a solution using either of those instead of modulus.
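The fallback works through the identity a mod n = a − n·⌊a/n⌋. A minimal Python sketch of that approach, deliberately using math.floor in place of the native % operator:

```python
import math

def mod(a, n):
    # a mod n via the floor identity: a - n * floor(a / n).
    # For positive n the result always lands in [0, n).
    return a - n * math.floor(a / n)

def f(x):
    return mod(90 - x, 360)

for a in range(0, 360, 45):
    print(a, f(a))
```

The floor-based version handles negative inputs the same way Python's % does, so f(91) correctly comes out as 359.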
I want to perform OCR on images like this one:
It is a table with numerical data with colons as decimal separators.
It is not noisy, contrast is good, black text on white background.
As an additional preprocessing step, in order to get around issues with the frame borders, I cut out every cell, binarize it, pad it with a white border (to prevent edge issues) and pass only that single cell image to tesseract.
I also looked at the individual cell images to make sure the cutting process works as expected and does not produce artifacts. These are two examples of the input images for tesseract:
Unfortunately, tesseract is unable to parse these consistently. I have found no configuration where all 36 values are recognized correctly.
There exist a couple of similar questions here on SO, and the usual answer suggests a specific combination of the --oem and --psm parameters. So I wrote a Python script with pytesseract that loops over all combinations of --oem from 0 to 3 and all values of --psm from 0 to 13, as well as lang=eng and lang=deu. I ignored the combinations that throw errors.
Example 1: With --psm 13 --oem 3 the above "1,7" image is misidentified as "4,7", but the "57" image is correctly recognized as "57".
Example 2: With --psm 6 --oem 3 the above "1,7" image is correctly recognized as "1,7", but the "57" image is misidentified as "o/".
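Such a parameter sweep can be sketched as follows. This is a hypothetical reconstruction, not the original script; the pytesseract call itself is left as a comment since it requires a local tesseract install:

```python
from itertools import product

# Enumerate every --oem/--psm/lang combination to sweep over.
oems = range(0, 4)    # --oem 0..3
psms = range(0, 14)   # --psm 0..13
langs = ["eng", "deu"]

configs = [
    (lang, f"--oem {oem} --psm {psm}")
    for lang, oem, psm in product(langs, oems, psms)
]

# For each cell image, one would call pytesseract with every combination,
# skipping the ones that raise (some oem/psm pairs are invalid):
#
#   from pytesseract import image_to_string, TesseractError
#   try:
#       text = image_to_string(cell_img, lang=lang, config=config)
#   except TesseractError:
#       continue

print(len(configs))  # 2 languages x 4 oem values x 14 psm values = 112
```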
Any suggestions what else might be helpful in improving the output quality of tesseract here?
My tesseract version:
tesseract v4.0.0.20190314
leptonica-1.78.0
libgif 5.1.4 : libjpeg 8d (libjpeg-turbo 1.5.3) : libpng 1.6.34 : libtiff 4.0.9 : zlib 1.2.11 : libwebp 0.6.1 : libopenjp2 2.2.0
Found AVX2
Found AVX
Found SSE
Solution
Divide the image into six rows (the table has 36 values in a 6x6 grid)
Apply division normalization to each row
Set --psm to 6 (assume a single uniform block of text)
Read
From the original image, we can see there are six different rows.
In each iteration, we take one row, apply division normalization, and read it.
We need to understand how to set the image indexes carefully.
import cv2
from pytesseract import image_to_string
img = cv2.imread("0VXIY.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(h, w) = gry.shape[:2]
start_index = 0
end_index = int(h/5)
Question: Why do we declare start and end indexes?
We want to read a single row in each iteration. Let's look at an example:
The current image height and width are 645 and 1597 pixels.
Divide the image based on the indexes:

| start-index | end-index       |
|-------------|-----------------|
| 0           | 129             |
| 129         | 258 (129 + 129) |
| 258         | 387 (258 + 129) |
| 387         | 516 (387 + 129) |
Let's see whether the images are readable:

| start-index | end-index | image |
|-------------|-----------|-------|
| 0           | 129       |       |
| 129         | 258       |       |
| 258         | 387       |       |
| 387         | 516       |       |
Nope, they are not suitable for reading; maybe a little adjustment will help. Like:

| start-index | end-index | image |
|-------------|-----------|-------|
| 0           | 129 - 20  |       |
| 109         | 218       |       |
| 218         | 327       |       |
| 327         | 436       |       |
| 436         | 545       |       |
| 545         | 654       |       |
Now they are suitable for reading.
When we apply the division-normalization to each row:
| start-index | end-index | image |
|-------------|-----------|-------|
| 0           | 109       |       |
| 109         | 218       |       |
| 218         | 327       |       |
| 327         | 436       |       |
| 436         | 545       |       |
| 545         | 654       |       |
Now when we read:
1,7 | 57 | 71 | 59 | .70 | 65
| 57 | 1,5 | 71 | 59 | 70 | 65
| 71 | 59 | 1,3 | 57 | 70 | 60
| 71 | 59 | 56 | 1,3 | 70 | 60
| 72 | 66 | 71 | 59 | 1,2 | 56
| 72 | 66 | 71 | 59 | 56 | 4,3
Code:
import cv2
from pytesseract import image_to_string
img = cv2.imread("0VXIY.jpg")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(h, w) = gry.shape[:2]
# print(img.shape[:2])
start_index = 0
end_index = int(h/5) - 20
for i in range(0, 6):
    # print("{}->{}".format(start_index, end_index))
    gry_crp = gry[start_index:end_index, 0:w]
    blr = cv2.GaussianBlur(gry_crp, (145, 145), 0)
    div = cv2.divide(gry_crp, blr, scale=192)
    txt = image_to_string(div, config="--psm 6")
    print(txt)
    start_index = end_index
    end_index = start_index + int(h/5) - 20
I am trying to query only the root causes with more than 72 hours; whenever a root cause reaches 72 hours or more, its hours should be added up. For example:
I have root cause A = 78 hours and root cause B = 100 hours; since these two are more than 72, they should add up to 178 hours as "MNPT". Anything less than 72 hours should be added up separately to make up routine NPT.
I am using a derived table for the query, but the outcome still displays hours including those less than 72:
SELECT operation_uid, SUM(npt_duration) AS mnpt
FROM fact_npt_root_cause
WHERE npt_duration > 72
GROUP BY root_cause_code
HAVING SUM(npt_duration) > 72
See this table
| ROOT CAUSE CODE | NPT Duration |
|-----------------|--------------|
| A               | 23           |
| B               | 78           |
| C               | 45           |
| D               | 100          |
| E               | 90           |
When a root cause value is more than 72 hours, add up those values. For example:
root cause codes B, D, E = 78 + 100 + 90 = 268 as MNPT
When a root cause value is less than 72 hours, add up those values: 23 + 45 = 68 as routine NPT
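For clarity, the desired bucketing sketched in Python, using the per-root-cause totals from the sample table above:

```python
# Per-root-cause NPT totals from the sample table.
npt_by_root_cause = {"A": 23, "B": 78, "C": 45, "D": 100, "E": 90}

THRESHOLD = 72

# Root causes above the threshold count toward MNPT,
# the rest toward routine NPT.
mnpt = sum(h for h in npt_by_root_cause.values() if h > THRESHOLD)
routine_npt = sum(h for h in npt_by_root_cause.values() if h <= THRESHOLD)

print(mnpt)         # 78 + 100 + 90 = 268
print(routine_npt)  # 23 + 45 = 68
```

In SQL this corresponds to splitting the aggregation on the threshold rather than filtering rows out entirely with WHERE.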
I'm not sure what you want to do, but selecting operation_uid while grouping by root_cause_code assumes that you always have the same operation_uid for a given root_cause_code... Don't you rather mean:
SELECT operation_uid,
sum (npt_duration) as mnpt
FROM fact_npt_root_cause
WHERE npt_duration>72
GROUP by operation_uid, root_cause_code
HAVING SUM (npt_duration)>72;
I have an exam for a university course shortly, and upon reviewing one of my assignments I have come to realize that I don't understand why I have lost marks/how to do a couple of questions. Hopefully someone can shed some light on the subject for me! The questions were as follows:
Use K-Maps to simplify the following boolean functions (Note that d() represents a don't care minterm):
1.) F(w, x, y, z) = ∑(1,3,5,7,11,12,13,15)
My answer:
Prime Implicants: yz, w'z, xz, wxy'
Essential Prime Implicants: yz, w'z, wxy'
Possible Minimal Expression(s): yz + w'z + wxy'
Answer sheet (professor's answer):
Prime Implicants: yz, w'z, xz, wxy'
Essential Prime Implicants: Same as prime implicants
Possible Minimal Expression(s): yz + w'z + xz + wxy'
2.) F(w, x, y, z) = ∑(1,2,5,7,12) + d(0,9,13)
My answer:
Prime Implicants: w'x'z', y'z, w'xz, wxy', w'x'y'
Essential Prime Implicants: w'x'z', w'xz, wxy'
Possible Minimal Expression(s): w'x'z' + w'xz + wxy'
Answer sheet (professor's answer):
Prime Implicants: w'x'z', y'z, w'xz, wxy', w'x'y'
Essential Prime Implicants: w'x'z', w'xz, wxy'
Possible Minimal Expression(s): w'x'z' + w'xz + wxy' + y'z
I suppose I should add that I asked my professor after he returned my assignment to me if he had made a mistake and explained my point of view. He seemed pretty certain that he was correct, but couldn't really explain why because he speaks poor English (well, that's university for you..).
Thanks in advance to anyone who can help! This has been quite a task to try to figure out on my own!
1.) You are correct: xz is not an essential prime implicant. It does not cover any minterm that is not already covered by the other prime implicants. Thus, it can be removed from the solution.
The Karnaugh map might help to see this more clearly:
wx
00 01 11 10
+---+---+---+---+
00 | 0 | 0 | 1 | 0 |
+---+---+---+---+
01 | 1 | 1 | 1 | 0 |
yz +---+---+---+---+
11 | 1 | 1 | 1 | 1 |
+---+---+---+---+
10 | 0 | 0 | 0 | 0 |
+---+---+---+---+
I am not sure what is meant by "possible minimal expressions". If you enumerate all potential encircled blocks in the map, xz would also be one.
2.) Your solution and the official solution are the same.
Again, as in 1.), the solution sheet also includes the non-essential terms among the "possible minimal expressions".
F = w x y' + w' x z + w' x' z' + w' x' y'
1.) F(w, x, y, z) = ∑(1,3,5,7,11,12,13,15)
wx
00 01 11 10
+---+---+---+---+
00 | 0 | 0 | 1 | 0 |
+---+---+---+---+
01 | 1 | 1 | 1 | 0 |
yz +---+---+---+---+
11 | 1 | 1 | 1 | 1 |
+---+---+---+---+
10 | 0 | 0 | 0 | 0 |
+---+---+---+---+
Note: here, the essential prime implicants are the prime implicants formed as follows. The implicant formed by minterms 12 and 13:
wxyz
1100
1101
result is wxy'
If you compute the prime implicant formed by minterms 3, 7, 11 and 15:
wxyz
0011
0111
1111
1011
result is yz
If you compute the prime implicant formed by minterms 1, 5, 3 and 7:
wxyz
0001
0101
0011
0111
result is w'z
so essential prime implicants are wxy', yz and w'z
xz is not an essential prime implicant because the implicant formed by minterms 5, 7, 13 and 15 is redundant: each of those minterms is already covered by w'z, yz, or wxy'.
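This can be checked mechanically. A small Python sketch verifying that yz + w'z + wxy' alone covers exactly the given minterms (variable order w, x, y, z, with w as the most significant bit):

```python
# Minterms of F(w, x, y, z) from question 1.
minterms = {1, 3, 5, 7, 11, 12, 13, 15}

def bits(m):
    # Decode a minterm index into (w, x, y, z), w being the MSB.
    return (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1

def f(m):
    w, x, y, z = bits(m)
    # yz + w'z + wxy' -- the proposed cover without xz.
    return (y and z) or ((not w) and z) or (w and x and (not y))

covered = {m for m in range(16) if f(m)}
print(covered == minterms)  # True: the three implicants suffice
```

Since the three implicants already cover every minterm, adding xz changes nothing, which is exactly why it is not essential.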
Simple precision issue with MySQL: a list of numbers does not produce the same result when summed row by row as when calculated in one step. How does one handle this scenario? There is a one (1) cent difference.
Sample of rows:
Select qty, (qty*21.25)
6.5 | 138.125
0.5 | 10.625
0.5 | 10.625
0.25 | 5.3125
1 | 21.25
2 | 42.5
1 | 21.25
2 | 42.5
2.5 | 53.125
2.5 | 53.125
2 | 42.5
3 | 63.75
3 | 63.75
3.5 | 74.375
Sample 2:
Select sum(qty), sum(amount)
30.25 | 642.8175
Sample 3:
Select 30.25*21.25
642.8125
Since the answer was so vague, let's try this. Can anyone explain why the MySQL statement produces the wrong result?
SELECT 6.875+3.125
10.00
Shouldn't this be
10.01
Read this article. Floating-point numbers are not stored as exact values, so there can be problems with calculation precision.
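A quick Python illustration of the same representation issue; MySQL's FLOAT/DOUBLE columns behave like binary floats, while the DECIMAL type, like Python's decimal module, does exact decimal arithmetic:

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation,
# so their float sum is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact decimal arithmetic (construct Decimals from strings, not floats).
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

For money columns, declaring the column as DECIMAL with an explicit scale avoids accumulating these rounding differences across thousands of rows.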
I have a Measurements table as follows:
SourceId : int
TimeStamp: date/time
Measurement: int
Sample data looks like this (more on the asterisks below):
SID| TimeStamp | Measurement
10 | 02-01-2011 12:00:00 | 30 *
10 | 02-01-2011 12:10:00 | 30
10 | 02-01-2011 12:17:00 | 32 *
10 | 02-01-2011 12:29:00 | 30 *
10 | 02-01-2011 12:34:00 | 30
10 | 02-01-2011 12:39:00 | 35 *
10 | 02-01-2011 12:46:00 | 36 *
10 | 02-01-2011 12:39:00 | 36
10 | 02-01-2011 12:54:00 | 36
11 | 02-01-2011 12:00:00 | 36 *
11 | 02-01-2011 12:10:00 | 36
11 | 02-01-2011 12:17:00 | 37 *
11 | 02-01-2011 12:29:00 | 38 *
11 | 02-01-2011 12:34:00 | 38
11 | 02-01-2011 12:39:00 | 37 *
11 | 02-01-2011 12:46:00 | 36 *
11 | 02-01-2011 12:39:00 | 36
11 | 02-01-2011 12:54:00 | 36
I need a LINQ query that will return only the rows when the Measurement value is different from the prior row having the same SourceId (i.e. each row marked with an asterisk). The table should be sorted by SourceId, then TimeStamp.
The data from the query will be used to plot a graph where each SourceId is a series. The source table has several million rows and the repeating measurements are in the thousands. Since these repeating measurement values don't make any difference to the resulting graph I'd like to eliminate them before passing the data to my graph control for rendering.
I have tried using Distinct() in various ways, and reviewed the Aggregate queries here http://msdn.microsoft.com/en-us/vcsharp/aa336746 but don't see an obvious solution.
Sometimes a plain old foreach loop will suffice.
var finalList = new List<MyRowObject>();
MyRowObject prevRow = null;
foreach (var row in myCollection)
{
    if (prevRow == null || (row.SID != prevRow.SID || row.Measurement != prevRow.Measurement))
    {
        finalList.Add(row);
    }
    prevRow = row;
}
Personally, I like the DistinctUntilChanged extension method that is included in the Rx Extensions library. It's very handy. As is the rest of the library, by the way.
But I do understand, you might not want to add a whole new dependency just for this. In this case, I propose Zip:
sequence.Take(1).Concat(
    sequence.Zip(sequence.Skip(1), (prev, next) => new { item = next, sameAsPrevious = prev == next })
            .Where(x => !x.sameAsPrevious)
            .Select(x => x.item)
)
There's no way to do this in a single query in sql. Ergo there's no way to do this in a single query in linq to sql.
The problem is you need to compare each row to the "next" row. That's just not something that sql does well at all.
Look at the first five rows:
10 | 02-01-2011 12:00:00 | 30 *
10 | 02-01-2011 12:10:00 | 30
10 | 02-01-2011 12:17:00 | 32 *
10 | 02-01-2011 12:29:00 | 30 *
10 | 02-01-2011 12:34:00 | 30
You want to keep 2 records with 30 and remove 2 records with 30. That rules out grouping.
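For comparison, the same keep-if-different-from-previous-row idea can be sketched in Python over hypothetical (sid, timestamp, measurement) tuples mirroring the sample data:

```python
# Sample rows for SourceId 10, as (sid, timestamp, measurement) tuples.
rows = [
    (10, "12:00", 30), (10, "12:10", 30), (10, "12:17", 32),
    (10, "12:29", 30), (10, "12:34", 30), (10, "12:39", 35),
    (10, "12:46", 36), (10, "12:49", 36), (10, "12:54", 36),
]

kept = []
prev = None
for row in rows:
    # Keep the row when the source changes or the measurement
    # differs from the previous row of the same source.
    if prev is None or row[0] != prev[0] or row[2] != prev[2]:
        kept.append(row)
    prev = row

print([m for _, _, m in kept])  # [30, 32, 30, 35, 36]
```

Notice that the value 30 survives twice, once at 12:00 and once at 12:29, which is precisely what a GROUP BY or Distinct() cannot express.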