Is it possible to get the creation/insertion order of elements in a Tcl array?

Tcl arrays are great for lookup tables, but in theory they are stored as "unordered sets". Is there any way to iterate through them in the order the elements were added, without adding extra code to track the insertion order yourself?

As far as I'm aware there is no way to get elements from an array back in the order they were added without keeping track of the insertion order yourself. The best way to get the behaviour you want is to move to using a dictionary rather than an array. A dict does retain the order of insertion and, as an added bonus, dicts are much nicer to work with when you're passing them into or out of procs.
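For example, a minimal sketch (the key names are just placeholders): a dict hands its keys back in the order they were added, so iterating with dict for preserves insertion order.

set d [dict create]
dict set d banana 1
dict set d apple 2
dict set d cherry 3

dict for {key value} $d {
    puts "$key => $value"
}
# prints banana, apple, cherry -- the order of insertion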

Related

Rails - JSON column vs separate table

I'm currently working on a Ruby on Rails project in which I have objects with an association to instructions, meaning each object can have zero or more instruction objects that hold some basic data: title, data (a string), and position (for ordering them in the UI). I tried looking for an answer on Google but found nothing relevant. The instructions are specific to each object and shouldn't be used for lookup or search of any kind, so I figured I should store them as JSON within the object's own table instead of making a join table. My reasoning is that a join table would grow huge once there are many objects, and querying each object's instructions would therefore get slower over time. Is that a reasonable argument for storing this data as JSON instead of a has_many association?
Think of using JSON in an RDBMS as a form of denormalization. There are legitimate reasons to use denormalization, but you must keep in mind that it always optimizes for one type of query at the expense of other types of queries.
For example, in this case you could query your object and it would include the JSON document containing all instructions. But if you wanted to search for a specific instruction, it would be quite complex to find the row whose JSON document contains that instruction. Have you thought about how you would query that?
Using a normalized database design, i.e. the join table you mention, allows for more flexibility in queries. You can query the object table, or you can query the instruction table. Either way, you then simply join to the other table to get the corresponding rows.
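To make that concrete, here is a rough sketch of the two query styles; the table and column names (objects, instructions, title) are assumptions, and the JSON variant relies on MySQL 5.7+ JSON functions.

-- normalized: find objects that have an instruction titled 'Setup'
SELECT o.*
FROM objects o
JOIN instructions i ON i.object_id = o.id
WHERE i.title = 'Setup';

-- JSON column: the same search is harder to express and much harder to index
SELECT *
FROM objects
WHERE JSON_SEARCH(instructions, 'one', 'Setup', NULL, '$[*].title') IS NOT NULL;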
The way to make this more optimized is to use indexes on the columns you want to search. See my presentation How to Design Indexes, Really or the video.
Using JSON creates a lot of complexity that you probably haven't considered. See my presentation How to Use JSON in MySQL Wrong.

Quicker way to reverse strings in list

I have more than 40,000 strings in a list, and I want to reverse each of them and then compare with the original list.
I used a for loop to create the reverse list:
reverse_list = []
for word in words:  # 'words' is the original list of strings
    reverse_list.append(word[::-1])
Yet, this is taking a lot of time!
Is there a more efficient way, or is this wait to be expected? I want to improve efficiency.
If the list is dynamic and has things coming in and out of it, you could append each string's reversed form to a separate list at the same time as you append the original. Otherwise, if this is a one-off computation, its efficiency shouldn't be of much concern.
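For the one-shot case, a list comprehension is the idiomatic form and is usually somewhat faster than an explicit append loop, though the work is still proportional to the total number of characters. A minimal sketch, assuming the comparison you want is finding strings equal to their own reverse:

words = ["level", "python", "madam"]  # stand-in for the 40,000-string list

# build the reversed list in one pass
reverse_list = [word[::-1] for word in words]

# compare against the originals, e.g. pick out palindromes
palindromes = [w for w, r in zip(words, reverse_list) if w == r]
print(palindromes)  # ['level', 'madam']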

Why can you only prepend to lists in functional languages?

I have only used 3 functional languages -- Scala, Erlang, and Haskell -- but in all 3 of these, the correct way to build lists is to prepend the new data to the front and then reverse the result, instead of just appending to the end. Of course, you could append to the lists, but that results in an entirely new list being constructed.
Why is this? I could imagine it's because lists are implemented internally as linked lists, but why couldn't they just be implemented as doubly linked lists so you could append to the end with no penalty? Is there some reason all functional languages have this limitation?
Lists in functional languages are immutable / persistent.
Prepending to the front of an immutable list is cheap because you only have to allocate a single node whose next pointer points to the head of the previous list. There is no need to change the original list: since it is a singly linked list, anyone holding a pointer to the previous head never sees the new node.
Adding a node to the end of the list requires modifying the last node to point to the newly created node. But this is not possible because the node is immutable. The only option is to create a new node which has the same value as the last node and points to the newly created tail. This process must repeat itself all the way up to the front of the list, resulting in a brand new list which is a copy of the first list except for the tail node. Hence it is more expensive.
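A small sketch in Python, using nested tuples as stand-in immutable cons cells (purely for illustration), shows why the two costs differ:

NIL = None  # the empty list

def prepend(value, lst):
    # O(1): allocate one new cell; the existing list is shared, never copied
    return (value, lst)

def append(lst, value):
    # O(n): every cell has to be recreated so the last one can point to the new tail
    if lst is NIL:
        return (value, NIL)
    head, tail = lst
    return (head, append(tail, value))

xs = prepend(1, prepend(2, prepend(3, NIL)))  # (1, (2, (3, None)))
ys = append(xs, 4)                            # a full copy of xs plus the new node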
Because there is no way to append to a list in O(1) without modifying the original (which you don't do in functional languages)
Because it's faster
They certainly could support appending, but it's so much faster to prepend that they limit the API. It's also kind of non-functional to append, as you must then modify the last element or create a whole new list. Prepending works in an immutable, functional style by its nature.
That is the way in which lists are defined. A list is defined as a linked list terminated by a nil; this is not just an implementation detail. This, coupled with the fact that these languages have immutable data (at least Erlang and Haskell do), means that you cannot implement them as doubly linked lists: adding an element would then modify the list, which is illegal.
By restricting list construction to prepending, anybody else holding a reference to some part of the list further down will not see it unexpectedly change behind their back. This allows for efficient list construction while retaining the property of immutable data.

Quickest way to represent array in mysql for retrieval

I have an array of PHP objects that I want to store in a MySQL database table. The only way I can think of is to have one table to represent the object, with a unique id, and a separate table to store the array (with an array_id column and an object_id column), but retrieving would require a join, I believe, which could get expensive. Is there a better way? I don't care about storage space or insertion time as much as retrieval time.
I don't necessarily need this to work for associative arrays but if the solution could, that would be preferred.
Building a tree structure (read: array) in MySQL can be tricky, but it is done all the time. Almost any forum with nested threads has some mechanism to store a tree structure. As another poster said, they do not have to be expensive.
The real question is how you want to use the data. If you need to be able to add/remove data fields from individual nodes in the tree, then you can use one of two models:
1) Adjacency List Model
2) Modified Preorder Tree Traversal Algorithm
(They sound scary, but it's not that bad I promise.)
The first one listed is probably the more common one you will encounter, and the second is the one I have begun to use more frequently; it has some nice benefits once you wrap your head around it. Take a look at this page -- it has an EXCELLENT writeup about both.
http://articles.sitepoint.com/article/hierarchical-data-database
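As a rough illustration of the adjacency list model (table and column names here are assumptions), each row simply points at its parent:

-- adjacency list model: each node stores a reference to its parent
CREATE TABLE nodes (
    id        INT PRIMARY KEY,
    parent_id INT NULL,               -- NULL for the root
    value     VARCHAR(255),
    FOREIGN KEY (parent_id) REFERENCES nodes(id)
);

-- fetch the direct children of node 42
SELECT * FROM nodes WHERE parent_id = 42;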
As another poster said though, if you don't need to change the data with queries or search inside the text then use a PHP function to store it in a single field.
$array = array('something' => 'fun', 'nothing' => 'to do');
$storage_array = serialize($array);   // store this string in a TEXT/BLOB column
// INSERT INTO DB
// DRAW OUT OF DB
$array = unserialize($row['stored_array']);
Presto-changeo, that one is easy.
If you are comfortable with not being able to search through the data within the array with SQL, you could add a single column to the table and serialize the array into it. You would have to deserialize it on retrieval.
You could use JSON / PHP serialization, or whatever is most appropriate for the language you're developing in.
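For instance, the JSON variant of the snippet above might look like this (a sketch; passing true to json_decode returns an associative array, which also covers the associative-array case):

$json = json_encode($array);                       // store this string in a TEXT column
// INSERT INTO DB
// DRAW OUT OF DB
$array = json_decode($row['stored_array'], true);  // true => associative array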
Joins don't have to be so expensive - you can define an index.

The data structure for the "order" property of an object in a list

I have a list of ordered objects stored in the database and accessed through an ORM (specifically, Django's ORM). The list is sorted arbitrarily by the user and I need some way to keep track of it. To that end, each object has an "order" property that specifies its order in relation to the other objects.
Any data structure I use to sort on this order field will have to be recreated upon the request, so creation has to be cheap. I will often use comparisons between multiple objects. And insertions can't require me to update every single row in the database.
What data structure should I use?
A heap or other sorted container, e.g. as implemented by std::set (C++), with comparison on the stored value (integers by value, strings by string ordering, and so on). Good for ordering and access, but it is not clear when you need to do the reordering: during insertion, constantly, and so on.
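A toy Python sketch of the same idea, keeping (order, primary key) pairs in a sorted list that is cheap to rebuild on each request (the values below are made up):

import bisect

# (order, pk) pairs pulled from the ORM; rebuilding is a single sort per request
items = sorted([(3, 11), (1, 7), (2, 9)])   # -> [(1, 7), (2, 9), (3, 11)]

# a new pair finds its slot by comparison; the existing entries are untouched
bisect.insort(items, (2.5, 42))
print(items)                                # [(1, 7), (2, 9), (2.5, 42), (3, 11)]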
You're not specifying what other constraints you have. If the number of elements is known and it is reasonable or possible to keep a fixed array in memory, then each position can be specified by its index. This assumes a fixed number of elements.
You did not specify which operations you want to perform often.
If you load the values once and then access them without modification, you can use one approach for creation and then either build an index or transform the storage to allow fast access.
Linked list (doubly)?
I don't think you can get constant time for insertions (but maybe amortized constant time?).
I would use a binary tree; the bit sequence describing the path to each node can then be used as your "order" property.