Quicker way to reverse strings in list - reverse

I have more than 40,000 strings in a list, and I want to reverse each of them and then compare with the original list.
I used a for loop to create the reversed list:
for word in list:
    reverse_list.append(word[::-1])
Yet this is taking a lot of time!
Is there a more efficient way, or is the wait expected? I want to improve efficiency.

If the array is dynamic, with items coming in and out of it, then whenever you append to it you could also append the reversed string to a separate array. Otherwise, if this is a one-off computation, its efficiency shouldn't be of much concern.
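For the original loop, a list comprehension is the idiomatic (and usually somewhat faster) way to build the reversed list, since it avoids the repeated `append` attribute lookup; 40,000 short strings should take well under a second either way. A minimal sketch with hypothetical sample data:

```python
words = ["hello", "level", "world", "racecar"]

# Build the reversed list in one pass.
reversed_words = [w[::-1] for w in words]

# Comparing each string with its reversal, e.g. to pick out palindromes:
palindromes = [w for w in words if w == w[::-1]]
```

If the loop still feels slow at this size, the bottleneck is more likely elsewhere (e.g. how the list is read in or compared) than in the reversal itself.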

Related

Are lists really necessary? Are there benefits?

Is there any benefit to using an ordered list versus just typing out the number and item? It just seems like a waste of time (considering it takes more time to type the tags for each item). When should and shouldn't I use an ordered/unordered list?
Well, in most cases you are correct that you can use a normal method; however, if organization is something you are concerned with, then having a list will help you much more. For example, you are able to modify your lists far more easily than you could if you had to modify each individual segment of manually typed-out information.

Store Miscellaneous Data in DB Table Row

Let's assume I need to store some data of unknown amount within a database table. I don't want to create extra tables, because this will take more time to get the data. The amount of data can be different.
My initial thought was to store data in a key1=value1;key2=value2;key3=value3 format, but the problem here is that some value can contain ; in its body. What is the best separator in this case? What other methods can I use to be able to store various data in a single row?
The example content of the row is like data=2012-05-14 20:07:45;text=This is a comment, but what if I contain a semicolon?;last_id=123456, from which I can then get, through PHP, an array with corresponding keys and values after correctly exploding the row text with a separator.
First of all: you never, ever store more than one piece of information in a single field if you need to access the pieces separately or search by one of them. This has been discussed here quite a few times.
Assuming you always want to access the complete collection of information at once, I recommend using the native serialization format of your development environment: e.g. if it is PHP, use serialize().
If it is cross-platform, JSON might be the way to go: good JSON encoding/decoding libraries exist for practically every environment out there. The same is true for XML, but in this context the textual overhead of XML is going to bite a bit.
On a side note: are you sure that storing the data in additional tables is slower? You might want to benchmark that before finally deciding.
Edit:
After reading, that you use PHP: If you don't want to put it in a table, stick with serialize() / unserialize() and a MEDIUMTEXT field, this works perfectly, I do it all the time.
EAV (cringe) is probably the best way to store arbitrary values like you want, but it sounds like you're firmly against additional tables for whatever reason. In light of that, you could just save the result of json_encode in the table. When you read it back, just json_decode to get it back into an array.
Keep in mind that if you ever need to search for anything in this field, you're going to have to use a SQL LIKE. If you never need to search this field or join it to anything, I suppose it's OK, but if you do, you've totally thrown performance out the window.
You could use quotes as the delimiters:
key1='value1';key2='value2';key3='value3'
If that doesn't work for you, post your SQL example and we can see how to do it.

Hashing Function Vs Loop search

I have an array of structures, ~100 unique elements, and the structure is not large. Due to legacy code, to find an element in this array I use a hash function to find a likely starting point and then loop from there until I find the element I want.
My question is this: is the hash function (and resulting hash table) overkill?
I know that for large tables hashing is essential for good response time, but for a table this size?
More succinctly, is there a table size below which writing a hash function is unnecessary?
Language-agnostic answers, please.
Thanks,
A hash lookup trades better scalability for a bigger up-front computation cost. There is no inherent table size, as it depends on the cost of your hash function. Roughly speaking, if calculating your hash function has the same cost as one hundred equality comparisons, then you could only theoretically benefit from the hash map at some point above one hundred items. The only way to get specific answers for your case is to measure the performance.
My guess though, is that a hash map for 100 items for performance reasons is overkill.
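As the answer says, the only way to get a specific answer is to measure. A rough sketch of such a measurement in Python (the 100-entry table and its keys are hypothetical; substitute your own data and hash function):

```python
import timeit

# Hypothetical 100-entry table: compare a worst-case linear scan
# against a hash (dict) lookup for the same key.
keys = [f"key{i:03d}" for i in range(100)]
table = {k: i for i, k in enumerate(keys)}

linear = timeit.timeit(lambda: keys.index("key099"), number=10_000)
hashed = timeit.timeit(lambda: table["key099"], number=10_000)
```

On typical hardware the hash lookup wins even at 100 items, but the absolute difference is tiny; measure with your real keys and hash function before restructuring working code.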
The standard, obvious answer would be/is to write the simplest code that can do the job. Ensure that your interface to that code is as clean as possible so you can replace it when/if needed. Later, if you find that code takes an unacceptable amount of time, replace it with something that improves performance.
On a theoretical basis, however, it's impossible to guess at the upper limit on the number of items for which a linear search will provide acceptable performance for your task. It's also impossible to guess at the number of items for which a hash table will provide better performance than a linear search.
The main point, however, is that it's rarely necessary to try to figure out (especially on a poorly-defined theoretical basis) what data structure would be best for a given situation. In most cases, you just need to make an acceptable decision, and implement it so you can change your mind later if it turns out to be unacceptable after all.
When creating your array of unique elements (or after it's created), sort it by key value. Then use binary search rather than a hash or linear search. You get a simple implementation, no extra memory usage, and good performance.
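A sketch of that sorted-array approach, shown in Python with the standard bisect module (the keys are hypothetical; the same idea works in any language with a binary-search routine):

```python
import bisect

# Sort the keys once, up front.
keys = sorted(["delta", "alpha", "charlie", "bravo"])

def find(key):
    """Return the index of key via binary search, or -1 if absent."""
    i = bisect.bisect_left(keys, key)
    return i if i < len(keys) and keys[i] == key else -1
```

For ~100 elements this gives at most 7 comparisons per lookup, with no hash function to write or tune.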

Is it possible to get the Creation order/Insertion order of elements in a TCL array?

Tcl arrays are great for lookup tables, but they are stored as "unordered sets" in theory. Is there any way to iterate through them in the order elements were added to the array, without adding extra code to track the insertion order yourself?
As far as I'm aware, there is no way to get elements from an array back in the order they were added without keeping track of the insertion order yourself. The best way to get the behaviour you want is to move to using a dictionary rather than an array. A dict does retain the order of insertion, and as an added bonus dicts are much nicer to work with when you're passing them into or out of procs.

Quickest way to represent array in mysql for retrieval

I have an array of PHP objects that I want to store in a MySQL database table. The only way I can think of is to have one table to represent the object, with a unique id, and a separate table to store the array (there could be an array_id column and an object_id column), but retrieving would require a join, I believe, which could get expensive. Is there a better way? I don't care about storage space or insertion time as much as retrieval time.
I don't necessarily need this to work for associative arrays but if the solution could, that would be preferred.
Building a tree structure (read: array) in MySQL can be tricky, but it is done all of the time. Almost any forum with nested threads has some mechanism to store a tree structure. As another poster said, they do not have to be expensive.
The real question is how you want to use the data. If you need to be able to add/remove data fields from individual nodes in the tree then you can use one of two models
1) Adjacency List Model
2) Modified Preorder Tree Traversal Algorithm
(They sound scary, but it's not that bad I promise.)
The first one listed is probably the more common you will encounter and the second is the one I have begun to use more frequently and has some nice benefits once you wrap your head around it. Take a look at this page--it has an EXCELLENT writeup about both.
http://articles.sitepoint.com/article/hierarchical-data-database
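A minimal sketch of the first option, the adjacency-list model, using SQLite via Python purely for illustration (table and column names are hypothetical): each row stores its parent's id, and children are found with a simple lookup or self-join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, label TEXT)")
con.executemany(
    "INSERT INTO node VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "child a"), (3, 1, "child b"), (4, 2, "grandchild")],
)

# All direct children of node 1:
children = con.execute(
    "SELECT label FROM node WHERE parent_id = ? ORDER BY id", (1,)
).fetchall()
```

The trade-off: reading a whole subtree takes one query per level (or a recursive query), which is where the Modified Preorder Tree Traversal model earns its complexity.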
As another poster said though, if you don't need to change the data with queries or search inside the text then use a PHP function to store it in a single field.
$array = array('something' => 'fun', 'nothing' => 'to do');
$storage_array = serialize($array);
// INSERT INTO DB
// DRAW OUT OF DB
$array = unserialize($row['stored_array']);
Presto-changeo, that one is easy.
If you are comfortable with not being able to search through the data within the array via SQL, you could add a single column to the table and serialize the array into it. You would have to deserialize it on retrieval.
You could use JSON / PHP serialization, or whatever is more appropriate for the language you're developing in.
Joins don't have to be so expensive - you can define an index on the join column.
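A sketch of the two-table layout from the question with such an index, again using SQLite via Python for illustration (table and column names are hypothetical): with the index in place, the join is a B-tree lookup rather than a full scan of the item table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE objects (object_id INTEGER PRIMARY KEY, data TEXT)")
con.execute("CREATE TABLE array_items (array_id INTEGER, object_id INTEGER)")
con.execute("CREATE INDEX idx_items_array ON array_items (array_id)")

con.executemany("INSERT INTO objects VALUES (?, ?)", [(1, "a"), (2, "b")])
con.executemany("INSERT INTO array_items VALUES (?, ?)", [(10, 1), (10, 2)])

# Retrieve every object belonging to array 10 in one indexed join:
rows = con.execute(
    "SELECT o.data FROM array_items i "
    "JOIN objects o ON o.object_id = i.object_id "
    "WHERE i.array_id = ?", (10,)
).fetchall()
```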