Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
So in my flat's lift (this being the UK) the buttons aren't labelled G, 1, 2, 3 etc., nor in the American fashion of 1, 2, 3, 4 etc.
They're labelled 0, 1, 2, 3, i.e. they're indexed from 0.
I thought to myself: 'Clearly, if you were to write a goToFloor-like function to represent moving between floors, you could do so by the index of the element. Easy!'
And then I realised not all languages start their arrays from 0, some start from 1.
How is this decision made? Is it one of efficiency (I doubt it!)? Ease for new programmers (arguably, anyone who makes the mistake once won't make it again)?
I can't see any reason a programming language would deviate from a standard, whether it be 0, 1 or any other number. With that in mind, perhaps it would help to know the first language that had the ability to index and then the first language to break whatever convention was set?
I hope this isn't too 'wishy-washy' a question for SO, I'm very eager to hear the history behind indexing.
When the first programming languages were designed, arrays started at 0 because an array maps to memory positions: the array name denotes a base memory address, and the index is used as an offset to reach the adjacent values. Seen this way, the number is the distance from the start, not the ordinal position in the array.
From a mathematical point of view it makes sense, because it helps to implement algorithms more naturally.
However, 0 is not appealing to humans, because we start counting at 1. It's counter-intuitive, and this is why some languages decided to start arrays at 1. (Note that some of them, like VB, allow you to choose between 0- and 1-based arrays.)
Interesting information on this topic can be found in this famous Dijkstra article:
Why numbering should start at zero
The first "language" would have been assembler. There an array is simply the memory address of the first element. To access one element in the array, an offset is added. So if the array is at position t0, then t0+0 is the first element, t0+1 is the second element etc. This leads to indexes starting at 0. Later, higher level languages added a better nicer syntax, but the indexes stayed the same way.
Sometimes, however, exceptions were made. In Pascal, for example, a String is an array of bytes, but the first byte of the array/string stores the length of the string, so the first letter is stored at index 1. Index 0 still exists, though, and can be used to get said length.
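The address-plus-offset model described above can be sketched in Python; the flat "memory" list, the base address, and all values here are purely illustrative:

```python
# Simulate the address + offset model: a flat memory array and a base address.
memory = [0] * 64          # pretend this is RAM
base = 10                  # "address" where our 5-element array starts

# Store an array of 5 values: element i lives at base + i.
for i in range(5):
    memory[base + i] = i * 100

def element(base, i):
    """Return the i-th element, i being the distance from the start."""
    return memory[base + i]

print(element(base, 0))  # 0   (first element: offset 0, hence 0-based indexing)
print(element(base, 3))  # 300 (fourth element: offset 3)
```

The first element sits at the base address itself, which is exactly why its index (offset) is 0.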
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 7 days ago.
I am trying to understand how the FFT works. To get a feel for the background, I tried to work through an example. I saw that there is a formula for calculating the frequency value from the complex numbers. The formula is below.
We calculate the frequency directly from the complex number's index, the sampling rate, and the total number of items in the FFT list.
But in this situation we ignore the complex numbers themselves, and I don't understand why we do that. Can someone give me a clue about this?
This video has lots of FFT outputs as complex numbers, but the presenter ignores the complex values entirely and just finds the index (k), since we already know the sampling rate and the length of the FFT result (N).
From this calculation he obtains the frequency value. Is it normal to ignore all of the complex values, or am I missing something about this calculation?
These are my complex number values, and I want to calculate the frequency value by hand using the formula. How can I do that?
Thanks in advance for all comments
I tried the FFT calculations, but ignoring the complex numbers leaves me stuck.
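For what it's worth, here is a minimal Python sketch (with made-up sample values, and a naive hand-written DFT instead of a real FFT library) of how the two pieces fit together: the magnitudes of the complex values select the bin index k, and only then does f = k * sample_rate / N give the frequency:

```python
import cmath
import math

def dft(samples):
    """Naive DFT: X[k] = sum_t x[t] * e^(-2*pi*i*k*t/N)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

sample_rate = 64.0               # Hz -- illustrative values, not from the question
n = 64
tone = 5.0                       # a pure 5 Hz sine wave
samples = [math.sin(2 * math.pi * tone * t / sample_rate) for t in range(n)]

spectrum = dft(samples)

# The complex values are NOT ignored: their magnitudes tell us WHICH bin k
# holds the energy. Only then does f = k * sample_rate / N give the frequency.
k = max(range(n // 2), key=lambda i: abs(spectrum[i]))
frequency = k * sample_rate / n
print(k, frequency)              # peak bin and its frequency in Hz
```

So the complex values are used to find the dominant index k (via their magnitudes); the frequency formula itself then needs only k, the sampling rate, and N.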
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
What's the complexity of the iterator++ operation for an STL RB-tree (set or map)?
I always thought they would use indices, so the answer should be O(1), but recently I read the VC10 implementation and was shocked to find that they do not.
To find the next element in an ordered RB-tree, it takes time to find the smallest element in the right subtree, or, if the node has no right child, to climb up until you reach an ancestor of which the current node lies in the left subtree. This introduces a recursive process, and I believe the ++ operator takes O(log n) time.
Am I right? And is this the case for all STL implementations, or just Visual C++?
Is it really difficult to maintain indices for an RB-tree? As far as I can see, by holding two extra pointers in the node structure we could maintain a doubly linked list alongside the RB-tree. Why don't they do that?
The amortized complexity when incrementing the iterator over the whole container is O(1) per increment, which is all that's required by the standard. You're right that a single increment can take O(log n), since the depth of the tree is in that complexity class.
It seems likely to me that other RB-tree implementations of map will be similar. As you've said, the worst-case complexity of operator++ could be improved, but the cost isn't trivial.
It is quite possible that the total time to iterate the whole container would be improved by the linked list, but it's not certain, since bigger node structures tend to cause more cache misses.
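To make the amortized argument concrete, here is a Python sketch on a plain unbalanced BST (red-black rebalancing doesn't change the successor logic), counting pointer hops over a full in-order walk; all names and values are illustrative:

```python
# In-order successor on a plain BST, counting pointer hops to show that a
# whole traversal is O(n) total, i.e. O(1) amortized per increment.
class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None

def insert(root, key):
    node, parent = root, None
    while node:
        parent, node = node, (node.left if key < node.key else node.right)
    child = Node(key, parent)
    if parent is None:
        return child
    if key < parent.key:
        parent.left = child
    else:
        parent.right = child
    return root

hops = 0

def successor(node):
    """Next node in key order; O(height) worst case, O(1) amortized."""
    global hops
    if node.right:                      # smallest key in the right subtree
        node = node.right; hops += 1
        while node.left:
            node = node.left; hops += 1
        return node
    # No right child: climb while we are a right child, then step to parent.
    while node.parent and node is node.parent.right:
        node = node.parent; hops += 1
    hops += 1
    return node.parent

root = None
for k in [8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7]:
    root = insert(root, k)

node = root
while node.left:                        # start at the minimum
    node = node.left
order = []
while node:
    order.append(node.key)
    node = successor(node)

print(order)        # keys come out in sorted order
print(hops)         # total hops over the whole walk is O(n)
```

Each tree edge is crossed at most twice over the full traversal (once going down, once coming back up), which is where the O(1) amortized bound comes from.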
The question is as simple as the title implies, although I'm limited by the fact that I'm trying to build a Moodle STACK question and therefore can't access all of Maxima's libraries (nor put expressions on multiple lines in the question-variables form field, among other limitations I'm probably not even aware of yet). The basic matrix operations, like retrieving a row of a matrix, seem to be available, though.
Is there a ready-made function for this purpose (the documentation implies there isn't), or do I need to make one of my own? Because of the mentioned limitations, doing it myself might not be possible.
OK, assuming the problem is "pick the nonzero entries out of the first row of the solution matrix." Try this:
sublist (M[1], lambda ([x], notequal (x, 0)));
assuming M is the matrix in question.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
Is there any mechanism for storing all the information of a probability distribution (discrete and/or continuous) into a single cell of a table? If so, how is this achieved and how might one go about making queries on these cells?
Your question is very vague, so only general hints can be provided.
I'd say there are two typical approaches for this (if I got your question right):
you can store complex data in a single "cell" (as you call it) of a database table. The easiest way is JSON encoding: you take an array of values, encode it to a string, and store that string. To access the values again, you query the string and decode it back into an array. Newer versions of MariaDB and MySQL offer an extension to access such values at the SQL level too, though access that way is pretty slow.
you use an additional table for the values and store only a reference in the cell. This is actually the typical and preferred approach: it is how the relational database model works. The advantage is that you can directly access each value separately in SQL, use mathematical operations like sums and averages at the SQL level, and you are not limited in storage space as you are with a single cell. You can also filter the values, for example by date ranges or value boundaries.
In the end, taken together, both approaches offer the same functionality, though they require different handling of the data. The first approach additionally requires a scripting language on the client side to handle encoding and decoding, but that is typically available anyway.
The second approach is considered cleaner and will be faster in most cases, except when you always access the whole set of values at once. So a decision can only be made knowing more specific details about the environment and the goal of the implementation.
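A minimal sketch of the first approach, using SQLite and a JSON-encoded discrete distribution; the table name, column names, and the example distribution are all made up for illustration:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE experiments (id INTEGER PRIMARY KEY, dist TEXT)")

# A discrete distribution: outcome -> probability (a fair six-sided die).
die = {"1": 1/6, "2": 1/6, "3": 1/6, "4": 1/6, "5": 1/6, "6": 1/6}

# Encode the whole distribution into one cell as a JSON string.
conn.execute("INSERT INTO experiments (dist) VALUES (?)", (json.dumps(die),))

# Reading it back: fetch the string and decode it into a dict again.
row = conn.execute("SELECT dist FROM experiments WHERE id = 1").fetchone()
dist = json.loads(row[0])
print(sum(dist.values()))   # probabilities sum to (approximately) 1
```

Note that SQL itself cannot filter or aggregate over the individual probabilities stored this way; that is exactly the trade-off against the separate-table approach.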
Say we have a distribution in column B like:
and we want to place the distribution in a single cell. In C1 enter:
=B1
and in C2 enter:
=B1 & CHAR(10) & C1
and copy downwards. Finally, format cell C13 with wrap on:
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am working on a system that can create made-up fantasy words based on a variety of user input, such as syllable templates or a modified Backus-Naur Form. One new mode, though, is planned to be machine learning: the user does not explicitly define any rules, but pastes in some text, and the system learns the structure of the given words and creates similar words.
My current naïve approach would be to build a table of letter-neighborhood probabilities (including a special end-of-word "letter") and fill it by scanning the input in letter pairs (using whitespace and punctuation as word boundaries). Creating a word would then mean looking up the probabilities of every letter that can follow the current letter, randomly choosing one according to those probabilities, appending it, and reiterating until end-of-word is encountered.
But I am looking for more sophisticated approaches that (probably?) provide better results. I do not know much about machine learning, so pointers to topics, techniques or algorithms are appreciated.
I think that for independent words (and especially names), a simple Markov chain system (which you seem to be describing when talking about using letter pairs) can perform really well. Feed it a lexicon and throw it a seed to generate a new name based on what it learned. You may want to tweak the prefix length of the Markov chain to get nice-sounding results (as pointed out in a comment on your question, 2 letters work much better than one).
I once tried it with elvish and orcish names dictionaries and got very satisfying results.
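The letter-pair approach can be sketched as an order-1 Markov chain in Python; the tiny lexicon below is illustrative, where real input would be the user's pasted text:

```python
import random
from collections import defaultdict

START, END = "^", "$"      # special start-of-word and end-of-word markers

def train(words):
    """Collect, per letter, the list of letters observed to follow it."""
    table = defaultdict(list)
    for word in words:
        prev = START
        for ch in word.lower():
            table[prev].append(ch)   # repetition = frequency weighting
            prev = ch
        table[prev].append(END)
    return table

def generate(table, max_len=12):
    """Walk the chain from START, picking each next letter at random."""
    word, prev = "", START
    while len(word) < max_len:
        ch = random.choice(table[prev])
        if ch == END:
            break
        word += ch
        prev = ch
    return word

lexicon = ["elrond", "elrun", "eldar", "arwen", "aragorn", "legolas"]
table = train(lexicon)
random.seed(1)
print([generate(table) for _ in range(5)])
```

Because `random.choice` draws from a list containing repeats, more frequent letter pairs are picked proportionally more often, which is the probability table from the question in disguise. Extending the state from one letter to a two-letter prefix is the "prefix length" tweak mentioned above.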