Why would you convert binary to decimal?

I wasn't able to find this question after searching here and on Google. I'm having trouble understanding the reason for converting binary to decimal. I'm sure this sounds incredibly stupid to some, but I have trouble learning if I don't know when or why I would apply the skill.

Converting from binary to decimal, while not useful in the computations themselves, is incredibly helpful for printing human-readable output in low-level languages that don't provide it, or on platforms where some unusual encoding unsupported by the program makes it impossible to use the standard printing tools.
Nonetheless, it is not needed more than 90% of the time.
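For illustration, here is a minimal sketch of what such a conversion looks like in practice (C++ used here; the function name and buffer convention are my own, not from the answer). Decimal digits are produced by repeated division by 10, the way you would when no printf or itoa is available:

    #include <cstdint>

    // Minimal sketch: render a machine word (binary) as decimal ASCII digits,
    // as you might on a platform with no standard printing facilities.
    // buf must hold at least 21 bytes (20 digits of a 64-bit value + NUL).
    // Returns a pointer to the first digit inside buf.
    char* to_decimal(std::uint64_t value, char* buf) {
        char* p = buf + 21;
        *--p = '\0';
        do {
            *--p = static_cast<char>('0' + value % 10);  // extract the lowest decimal digit
            value /= 10;                                 // drop that digit
        } while (value != 0);
        return p;
    }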

Related

Why did IATA choose XML over JSON for NDC services?

Maybe it looks silly... :)
But I am trying to find out what the strong reason was for IATA to choose XML over JSON.
I found a lot of documentation on NDC XMLs, but couldn't find a concrete XML functionality or feature that is not possible in JSON, considering the advantages of using JSON.
I could use some help in understanding it.
Thanks in advance.
A "why" question like this can be interpreted in two ways: (a) is there any historical evidence to show who made the decision, when they made it, and what their arguments were for making it? Or (b) can you think of any good reason why intelligent people might have made this choice?
I can't answer (a), but for (b) you have to look at the timeline. With something as big as IATA, it's likely they've been talking about this for at least 10 and maybe 20 years. Ten years ago, JSON was being promoted as "lightweight" - it didn't carry all the baggage of schemas, validation, transformation and query languages that went with XML. If you were in an airline, you didn't think of that as baggage, you thought of it as essential infrastructure. Being "lightweight" simply isn't a benefit in that world; on the contrary, the word is almost a suggestion that it's not up to doing heavyweight tasks.
Frankly (and at the risk of straying towards question (a)) I think it's very unlikely that the question of using JSON ever came up; they would all have been far too heavily committed to XML before anyone ever took JSON seriously. Don't forget that in 2005 XML was delivering things no one dreamt possible ten years earlier: a robust and rigorous data syntax, completely standardised, with full Unicode support, available at low cost on all platforms, with lots of tools around to support declarative processing. JSON was a new kid on the block, threatening to disrupt the consensus and fragment the industry, and for the people in this kind of community, that wasn't seen as something they needed or wanted.

Handling calculator input via strings vs. arithmetic

I'm creating a GUI application which has an input in the form of a calculator keyboard (very similar to your standard iOS or Android calculator app). The common input flow we're all familiar with is to append each new digit to the end of the currently displayed number.
From the perspective of both error safety and performance, is it better to concatenate a new digit to the end of an existing string and parse it when a new operation is to be performed — or to represent the number as a float or a double from the beginning?
Nowadays all the tutorials that I see on the topic suggest the string approach. Just intuitively, calculators are about numbers, not strings, so I'm hesitant to use this approach right now; however, I can't really find any strong reason against it, since it is much simpler to implement than tracking the significand and the exponent of a number.
That being said, I'm pretty sure that actual calculators do not use strings. If anyone could describe how they handle their input, I'd be very thankful.
Also, I'm worried that using floating-point numbers for input could affect precision, but I'm not sure here either. I would be glad to hear as many opinions as possible. I'm writing the app in Swift, but the same question could apply to a similar app in JavaScript or Kotlin, for example.
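For concreteness, here is a minimal sketch of the string-append flow described above (C++ used for illustration; the class and method names are hypothetical):

    #include <iostream>
    #include <string>

    // Hypothetical sketch: digits accumulate in a string and are parsed
    // only when an operator key is pressed.
    class CalculatorInput {
        std::string display_ = "0";
    public:
        void pressDigit(char d) {
            if (display_ == "0") display_.clear();  // avoid a leading zero
            display_ += d;                          // append the new digit
        }
        void pressDecimalPoint() {
            if (display_.find('.') == std::string::npos)
                display_ += '.';                    // at most one decimal point
        }
        // Parsing happens once, when an operation is performed.
        double valueForOperation() const { return std::stod(display_); }
        const std::string& display() const { return display_; }
    };

    int main() {
        CalculatorInput in;
        in.pressDigit('4');
        in.pressDigit('2');
        in.pressDecimalPoint();
        in.pressDigit('5');
        std::cout << in.display() << " -> " << in.valueForOperation() << '\n';  // 42.5 -> 42.5
    }

One property worth noting: because parsing happens only at operation time, the displayed digits and the parsed value cannot drift apart while the user is typing, which is harder to guarantee if you update a float after every keystroke.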

Is there a way to measure the complexity of a language?

When I embark on learning a new language (like Java) or a system (like git) it would be very helpful to get an idea of the overall size of the mountain I've got to climb.
Is there some way of measuring code in this way?
E.g. you can measure the height of a mountain and the difficulty of the ascent. Is there something similar for code?
UPDATE
This went some way towards answering what I wanted to know: http://redmonk.com/dberkholz/2013/03/25/programming-languages-ranked-by-expressiveness/
Not easily. Your best bet is probably to do a few searches: you are likely to find people saying things like "HTML is fairly simple" and "C++ is hard", though admittedly this is not a reliable method.
Ultimately, how hard something is to learn varies a lot between individuals, and a language's similarity to what you already know is also likely to have an impact.
If it helps, some general rules I have noticed are as follows:
Dynamically typed languages such as JavaScript and Python are very relaxed about variable types, which can make them easier to learn.
C-based languages (C, C++, Java, etc.) all have very similar syntax; if you can program in one, the others shouldn't be too hard to learn.
Languages like Python are designed to read like human language, which can make them far easier to understand, though they are often very different from other languages and can occasionally be inconsistent.
More traditional compiled languages like C and C++ are generally very strict about syntax (but flexible with formatting), which can give them a steeper learning curve. I would still recommend them to a beginner, because once you are used to strict syntax it is easier to adapt to a less strict one than vice versa.

What STL knowledge is a must for a C++ developer?

I have good knowledge of C++ but have never delved into the STL. What parts of the STL must I learn to improve productivity and reduce defects in my work?
Thanks.
I have good knowledge of C++
With all due respect: no, you don't. The standard library, or at least large parts of it (especially the subset known as the "STL"), is a fundamental part of C++. Without knowledge of it you don't know very much about C++ at all.
In fact, much of the modern design of C++ (essentially everything since the '98 version) was guided by design considerations stemming from the standard library, and many of the changes in the language since then are changes to the standard library. If you take a look at the official C++ language description, a good part of the document is concerned with the library.
Usually the first reaction (in my experience, of course) of people who have not worked with the STL before is to get upset by all the template code. So I would start by studying that subject a little.
In case you already know template fundamentals, I would recommend taking a brief look at an STL design document. This is actually the second source of friction for people not yet familiar with it: the STL is not designed under a typical object-oriented paradigm, but under the generic programming paradigm.
With this in mind, a good start could be this introductory article. The terms used throughout the STL components are explained there. Please note that it is a relatively old text focused on the SGI implementation (which predates the C++ standard and, for example, incorrectly presents the hash-based containers as part of it). However, the theory is still valid.
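As a tiny illustration of what "generic programming" means here (my own example, not from the article): an STL-style algorithm is written against iterators rather than a class hierarchy, so one implementation works for every container:

    #include <iostream>
    #include <list>
    #include <vector>

    // STL-style generic programming: the algorithm is parameterized over
    // iterator types, not over a common base class.
    template <typename InputIt, typename T>
    InputIt find_value(InputIt first, InputIt last, const T& value) {
        for (; first != last; ++first)
            if (*first == value) return first;
        return last;  // not found
    }

    int main() {
        std::vector<int> v{1, 2, 3};
        std::list<double> l{1.5, 2.5};
        // The same template works for both containers.
        std::cout << (find_value(v.begin(), v.end(), 2) != v.end()) << '\n';    // 1
        std::cout << (find_value(l.begin(), l.end(), 9.0) != l.end()) << '\n';  // 0
    }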
Well, if you already know most of the things I've said up to this point, just jump directly to the topics the others provided.
You mention improving your productivity and reducing defects. There are general guidelines that you can use for this. I assume C++11, and I mention a bit more than the STL (smart pointers); a short sketch illustrating these points follows the list:
Use containers; they will manage memory for you. You get rid of new for C arrays and of having to delete them later, for example.
For dynamic arrays, use std::vector. You also have hash tables in std::unordered_map and balanced trees in std::map. There are more containers; take a look here.
Use std::array instead of plain C arrays whenever you can: they never decay to pointers when passed as arguments to functions, which avoids some very nasty bugs.
Use smart pointers and forget forever about naked new and its matching delete in your code.
This can reduce errors far more than you would expect, especially in the presence of exceptions.
Use std::make_shared when possible. You can use it to allocate a shared_ptr directly as an argument to a function that takes a std::shared_ptr; with a naked new in the argument list this is not exception-safe (before C++17).
Use algorithms instead of hand-coded loops. The code will be far more readable and usually more performant.
With this advice your code should look closer (but not necessarily equal or semantically equivalent) to C# or Java, in which manual memory management disappears. This is even better than garbage collection in many scenarios, since you have deterministic guarantees for when a resource will be freed.
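Here is the promised sketch pulling these guidelines together (my own example, assuming C++11; the Widget type is made up):

    #include <algorithm>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Widget {
        std::string name;
        explicit Widget(std::string n) : name(std::move(n)) {}
    };

    int main() {
        // Containers manage memory: no new[]/delete[] needed.
        std::vector<int> values{3, 1, 4, 1, 5};

        // An algorithm instead of a hand-coded sorting loop.
        std::sort(values.begin(), values.end());

        // make_shared: no naked new, a single allocation, exception-safe.
        auto w = std::make_shared<Widget>("gadget");
        std::cout << w->name << " " << values.front() << '\n';  // gadget 1

        // No delete anywhere: vector and shared_ptr release their
        // resources deterministically when they go out of scope.
    }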
I'd say the algorithms from <algorithm> will really clean up your code and at the same time make it more concise (see the small example after this answer).
Obviously, knowing all the containers will help you optimize bottlenecks caused by a choice of container that is not optimal (but be sure to profile first).
These are pretty much the basics and they will help you a lot to make more robust code.
After that you can delve into smart pointers like std::shared_ptr, which are almost always better than raw pointers (in my experience, at least).
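As a small example of the <algorithm> point (mine, not from the answer), a hand-rolled counting loop next to the equivalent standard call:

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v{1, -2, 3, -4, 5};

        // Hand-coded loop:
        int negatives = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            if (v[i] < 0) ++negatives;

        // The same with <algorithm>: the intent is visible in the name.
        auto negatives2 = std::count_if(v.begin(), v.end(),
                                        [](int x) { return x < 0; });

        std::cout << negatives << ' ' << negatives2 << '\n';  // 2 2
    }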
I think you can start with containers (vector, list) and algorithms (binary search, sort).
And as Jesse Emond wrote, the more you know, the better you live)))

OCR lib for math formulas

I need an open OCR library which is able to scan complex printed math formulas (for example some formulas which were generated via LaTeX). I want to get some LaTeX-like output (or just some AST-like data).
Is there something like this already? Or are current OCR techniques only able to parse line-oriented text?
(Note that I also posted this question on Metaoptimize because some people there might have additional knowledge.)
The problem was also described by OpenAI as im2latex.
Seshat is an open-source system written in C++ for recognizing handwritten mathematical expressions. It was developed as part of a PhD thesis at the PRHLT research center at Universitat Politècnica de València.
An online demo: http://cat.prhlt.upv.es/mer/
The source: https://github.com/falvaro/seshat
Given a sample represented as a sequence of strokes, the parser is able to convert it to LaTeX or other formats like InkML or MathML.
According to the answers on Metaoptimize and the discussion on the Tesseract mailing list, there doesn't seem to be an open/free solution yet which can do that.
The only solution which seems to be able to do it (but I cannot verify, as it is Windows-only and non-free) is, as a few other people have mentioned, the InftyProject.
InftyReader is the only one I'm aware of. It is NOT free software (it seems the money goes to a non-profit org, IIRC).
http://www.sciaccess.net/en/InftyReader/
I don't know why PDFs can't have metadata in LaTeX: just put the LaTeX equation in it! Is this so hard? (I don't know anything about PDF syntax, but I imagine it could be done.)
LaTeX syntax is THE ONE TRIED AND TRUE STANDARD for mathematics notation. It seems amazingly stupid that the folks who produced MathML and other such things didn't take this into consideration. InftyReader generates MathML or LaTeX syntax.
If I want pure HTML, I then use TTH to read the LaTeX syntax. It just works.
ABBYY FineReader (a great OCR program) claims you can train the software for math, but this is immensely braindead (who has the time?).
And Unicode has lots of math symbols. That today's OCR readers can't grok them shows the sorry state of software and the brain deficit in this activity.
As to "one symbol at a time": TeX obviously has rules as to where it will place symbols. They can't write software that knows those rules?! TeX is even public domain! They could just use it in their commercial products.
Check out "Web Equation." It can convert handwritten equations to LaTeX, MathML, or SymbolTree. I'm not sure if the engine is open source.
Considering that current technologies read one symbol at a time (see http://detexify.kirelabs.org/classify.html), I doubt there is an OCR for full mathematical equations.
Infty works fairly well. My former company integrated it into an application that reads equations out loud for blind people and is getting good feedback from users.
http://www.inftyproject.org/en/download.html
Since the output from math OCR for complex formulas will likely have bugs -- even humans have trouble with it -- you will have to proofread the results, at least if they matter. The (human) proofreader will then have to correct the results, meaning you need a math formula editor. Given the effort needed from humans and the probably limited corpus of complex formulas, you might find it easier to assign the whole task to humans.
As a research problem, reading math via OCR is fun -- you need a formalism for 2-D grammars plus a symbol recognizer.
In addition to references already mentioned here, why not google for this? There is work that was done at Caltech, Rochester, U. Waterloo, and UC Berkeley. How much of it is ready to use out of the box? Dunno.
As of August 2019, there are a few options, depending on what you need:
For converting printed math equations/formulas to LaTex, Mathpix is absolutely the best choice. It's free.
For converting handwritten math to LaTex or printed math, MyScript is the best option, although its app costs a few dollars.
You know, there's an application in Win7 just for that: Math Input Panel. It even handles handwritten input (it's actually made for this). Give it a shot if you have Win7, it's free!
There is this great short video: http://www.youtube.com/watch?v=LAJm3J36tLQ
explaining how you can train your FineReader to recognize math formulas. If you already use FineReader, it's better to stick with one tool. Of course, it is not freeware :(