Suppose I have one million files in my directory. It would consume a huge amount of memory if I just did:
x = os.listdir('.')
Suppose, for some reason, I chose to use the os.walk method and wrote this to get a generator:
def give_object(somepath):
    for x in os.walk(somepath):
        for j in x[2]:
            yield j
os.walk is itself a generator, and via x I get a tuple of (cur_directory, sub_directories, list_of_all_files_in_cur_directory). x[2] would contain the 1 million file names. In the second for statement I'm also yielding a value, making a generator, but at that point a list has already been created for x. So, would this code really save the memory that would otherwise be used for the 1 million items? Or is this not a correct way of using a generator for this use case? If so, how should I go about doing it?
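(For reference, here is a sketch of a generator that never materialises the per-directory name list, using os.scandir from Python 3.6+; the recursion into subdirectories is an assumption carried over from the os.walk version above.)

import os

def give_object(somepath):
    # os.scandir yields one DirEntry at a time instead of building the whole
    # per-directory name list that os.walk hands back in x[2].
    with os.scandir(somepath) as entries:
        for entry in entries:
            if entry.is_file():
                yield entry.name
            elif entry.is_dir():
                # Recurse into subdirectories, mirroring os.walk's traversal.
                yield from give_object(entry.path)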
I am working with large CSV files (>> 10^6 lines) and need a row index for some operations. I need to compare two versions of the files to identify deletions, so I thought it would be easiest to include a row index. I guess that the number of lines would quickly make traditional integers unwieldy. I'm averse to the idea of having a column containing, let's say, 634567775577 in plain text as a row index (followed by the actual data row). Are there any best-practice suggestions for this scenario?
The resulting files have to remain plain text, so serialisation / sqlite is not an option.
At the moment, I'm considering an index based on the actual row data (for example, concatenating the row data and converting it to base64 or the like), but would that be more reasonable than a plain integer? There should be no duplicate rows within each file, so I guess this could be one way.
Cheers, Sacha
Ps: I heavily modified the initial question for clarification
You can use regular numbers.
Python is not afraid of large numbers :) (well, up to the order of magnitude you described...)
Just open a Python shell, type 10**999, and see that it doesn't overflow or anything.
In Python 3, there's no actual bit limit for integers. In Python 2 there technically is: an int is bounded by the machine word size, while a long is arbitrary precision, but a literal that is too large for an int is promoted to a long implicitly.
Python 3 has just one integer type, and its size is limited only by available memory.
So there's no real reason why you can't use an integer if you really want to add an index.
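A quick sanity check (the exact byte count below is an assumption about a 64-bit CPython build):

import sys

n = 10 ** 999                       # no overflow; Python 3 ints are arbitrary precision
print(len(str(n)))                  # 1000 digits
print(sys.getsizeof(634567775577))  # ~32 bytes for a 12-digit index on 64-bit CPython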
Python's built-in library includes SQLite (the sqlite3 module), a self-contained, one-file-fits-everything DBMS which, contrary to common perception, can be quite performant. If the records are consulted by a single application with no concurrency, it compares well to specialized DBMSs that require a separate daemon.
So, essentially, you can dump your CSV into an SQLite database and create the indices you need - even on all four columns if that is the case.
Here is a template script you could customize to create such a DB.
I guessed the value of 1000 for the number of inserts at a time, but it may
not be optimal - try tweaking it if inserting is too slow.
import sqlite3
import csv

inserts_at_time = 1000

def create_and_populate_db(dbfilename, csvfilename):
    db = sqlite3.connect(dbfilename)
    db.execute("""CREATE TABLE data (col1, col2, col3, col4)""")
    for col_name in "col1 col2 col3 col4".split():
        # One index per column; the idx_ prefix keeps index names distinct from the columns.
        db.execute(f"""CREATE INDEX idx_{col_name} ON data ({col_name})""")
    with open(csvfilename) as in_file:
        reader = csv.reader(in_file)
        next(reader)  # skip the header row
        total = 0
        while True:
            # Read the next batch of up to inserts_at_time rows.
            lines = [line for _, line in zip(range(inserts_at_time), reader)]
            if lines:
                db.executemany("INSERT INTO data VALUES (?, ?, ?, ?)", lines)
                total += len(lines)
                print(f"Inserted {len(lines)} lines - total {total}")
            if len(lines) < inserts_at_time:
                break  # reader is exhausted
    db.commit()
    db.close()
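Usage would then look something like the following (the file names are placeholders). Note that SQLite also gives each row an implicit integer rowid, which can double as the row index asked about:

create_and_populate_db("data.db", "data.csv")

# Later, look rows up by any indexed column; rowid is SQLite's implicit row index.
db = sqlite3.connect("data.db")
for row in db.execute("SELECT rowid, * FROM data WHERE col1 = ?", ("some value",)):
    print(row)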
Is it rational to use topic modelling for a single document? To be more precise, is it mathematically okay to use the LDA-Gibbs method on a single document, and if so, what should the values of k and seed be?
Also, what is the role of k and seed for a single document as well as for a large set of documents?
k and seed are arguments of the LDA function (in RStudio).
Also, let me know if I am wrong anywhere in this question.
To say a bit about my project: I am trying to find the main topics that can be used to represent the content of a single document.
I have already tried k = 4, 7, and 10. Part of my question is also which value of k would be better.
It really depends on the document. A document could be a 700-page book or a single sentence. Your k (I think you mean the number of topics?) is also going to depend on the document: if your document is the entire Wikipedia corpus, 1500 topics might be appropriate; if it is a list of comments about movies, then 20 topics might be appropriate. Optimizing that number can be done using the elbow method.
Seed can be pretty much anything; it's just a lever so your results can be replicated - it still runs if you leave it blank. I would say try it and check your coherence, eyeball your topics, and if it looks right then sure, you can train an LDA on one document. A single document should process pretty fast.
Here is an example in Python of using the seed parameter. My data set is 1,048,575 rows; note that the seed is just a large arbitrary number:
ldamallet = gensim.models.wrappers.LdaMallet(mallet_path, corpus=bow_corpus,
                                             num_topics=20, alpha=.1, id2word=dictionary,
                                             iterations=1000, random_seed=569356958)
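If you end up comparing several values of k, gensim's CoherenceModel can score each fitted model. A rough, untested sketch (tokenized_docs is a placeholder for the tokenized texts behind bow_corpus):

from gensim.models import CoherenceModel

# Higher coherence generally indicates more interpretable topics.
coherence = CoherenceModel(model=ldamallet, texts=tokenized_docs,
                           dictionary=dictionary, coherence='c_v').get_coherence()
print(coherence)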
I'm currently trying to develop a psychology experiment that involves 150 .tif images, each of which needs to be presented full screen for 0.5 s per trial after a noise has been heard. Each image needs to be presented at least once before any is presented again. I am building the experiment in pygame.
I've been thinking about saving all the images into a directory and then pulling them each out one by one. Does this seem like a good idea?
I'm very new to programming and would appreciate any help/links to similar questions. If I'm missing any relevant information please let me know.
Thank you!
Use the glob module to get a list of your image filenames: https://pymotw.com/3/glob/index.html
Loop over this list of filenames, load them with pygame.image.load and append the resulting images/pygame.Surfaces to a list.
Shuffle the list, create an index variable (index = 0) and assign the image at the current index to another variable, e.g. current_image = image_list[index].
Use a timer variable to increment the index and swap the image after the desired time interval (pygame.time.get_ticks, pygame.time.set_timer, or a delta-time approach all work).
If the index is >= the length of the list, reset it to 0 and shuffle the list again. A rough sketch putting these steps together follows below.
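A minimal sketch of those steps, assuming the images live in an images/ directory and a 500 ms display time; the window size is a placeholder (the real experiment would use pygame.FULLSCREEN):

import glob
import random
import pygame

DISPLAY_MS = 500                    # 0.5 s per image (assumption)

pygame.init()
screen = pygame.display.set_mode((800, 600))   # real experiment: pygame.FULLSCREEN
clock = pygame.time.Clock()

# Load every image once, up front.
image_list = [pygame.image.load(path).convert()
              for path in glob.glob("images/*.tif")]

random.shuffle(image_list)
index = 0
last_swap = pygame.time.get_ticks()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Swap to the next image every DISPLAY_MS milliseconds; once every image
    # has been shown, reshuffle and start over.
    now = pygame.time.get_ticks()
    if now - last_swap >= DISPLAY_MS:
        index += 1
        if index >= len(image_list):
            index = 0
            random.shuffle(image_list)
        last_swap = now

    screen.blit(image_list[index], (0, 0))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()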
Say I have a very large file (say > 1GB) and I want to add a single character in the middle of it. Is it possible to do this without reading and writing the whole file out? My current solution is this (in pseudocode):
1. x = 0
2. chunk = read 4 KB chunk x of the input file
3. if chunkToEdit == x, chunk = addCharacter(chunk)
4. append chunk to the output file
5. x = x + 1
6. repeat steps 2-5 until the input file is fully read
7. delete the input file
8. move the output file to the input file's path
While that works, it results in 1 GB of reading and 1 GB of writing to make a single-character change. It also requires a spare 1 GB of disk space. What I would rather do is modify the part of the file that needs to be changed in place, so I only have to read and write one part of the file (i.e. 4 KB of reading and 4 KB of writing). Is this possible, or is there a better solution than mine?
I thought the OS might make this possible by fragmenting the file and creating a new fragment for the changed section, but I don't know whether that capability has been implemented and exposed to developers.
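For concreteness, the pseudocode above corresponds to something like this in Python (the file names, offset, and inserted byte are placeholders):

import os

CHUNK = 4096

def insert_by_copy(in_path, out_path, insert_offset, byte):
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        written = 0                       # count of original bytes copied so far
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            orig_len = len(chunk)
            if written <= insert_offset < written + orig_len:
                # This chunk contains the insertion point: splice the byte in.
                split = insert_offset - written
                chunk = chunk[:split] + byte + chunk[split:]
            dst.write(chunk)
            written += orig_len
    os.replace(out_path, in_path)         # atomically replace the original

# insert_by_copy("big.file", "big.file.tmp", 123456789, b"X")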
No. Files don't work like that. If you need to change the size of the file then you need to operate from the modification point to the end.
Unless you're using a file format that can handle insertions/deletions cleanly, but it sounds like you aren't.
Adding a single character in the middle necessarily requires shifting everything after that character by one position, which means you have to read and write everything from the point of insertion to the end of the file. A way to do that with as little memory as possible would be:
1. i = 1
2. read the i-th chunk of n bytes, counting back from the end of the file
3. write that chunk back shifted forward by 1 character
4. i++
5. repeat steps 2-4 until the point of insertion is reached
6. write the single character at the point of insertion
In other words: shift everything in chunks of n bytes by one character starting from the end going backwards through the file to the point of insertion, then insert the character. The farther back in the file you want to insert the character, the faster this will be. If you often want to insert near the beginning of the file, this may not be the best solution.
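A sketch of that backward, in-place shift in Python (the path, offset, and chunk size are placeholders; since the file is modified in place, a backup is advisable):

import os

def insert_byte_in_place(path, insert_at, byte, chunk_size=4096):
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()                       # current end of file
        # Walk backwards from the end to the insertion point, moving each
        # chunk one byte towards the (new) end of the file.
        while pos > insert_at:
            start = max(insert_at, pos - chunk_size)
            f.seek(start)
            chunk = f.read(pos - start)
            f.seek(start + 1)
            f.write(chunk)
            pos = start
        # Finally write the inserted byte at the now-free position.
        f.seek(insert_at)
        f.write(byte)

# insert_byte_in_place("big.file", 123456789, b"X")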
I want to generate unique code numbers (composed of exactly 7 digits). The codes are generated randomly and saved in a MySQL table.
I have another requirement: all generated codes should differ in at least two digits. This is useful for preventing errors while typing a user code; hopefully it keeps a mistyped code from matching another existing user code, since it is much less likely that someone gets two digits wrong in just the right way.
The generation algorithm simply works like this:
1. Retrieve all previous codes, if any, from the MySQL table.
2. Generate one code at a time.
3. Subtract the generated code from each of the previous codes.
4. Check the number of non-zero digits in the subtraction result (see the sketch after this list).
5. If it is > 1, accept the generated code and add it to the previous codes.
6. Otherwise, jump to step 2.
7. Repeat steps 2 to 6 for the number of requested codes.
8. Save the generated codes in the DB table.
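Since the stated goal is a per-position difference, a direct digit-by-digit comparison captures it more faithfully than arithmetic subtraction (for example, 1000000 and 0999999 differ arithmetically by 1 yet differ in every digit position). A small sketch, in Python for illustration:

def digit_distance(a, b):
    """Number of positions in which two equal-length codes differ."""
    return sum(x != y for x, y in zip(a, b))

print(digit_distance("1234567", "1234589"))  # 2 -> an acceptable pair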
The algorithm works fine, but the problem is performance: it takes a very long time to finish when a large number of codes is requested, e.g. 10,000.
The question: Is there any way to improve the performance of this algorithm?
I am using Perl + MySQL on an Ubuntu server, if that matters.
Have you considered a variant of the Luhn algorithm? Luhn is used to generate a check digit for strings of numbers in lots of applications, including credit card account numbers. It's part of the ISO/IEC 7812-1 standard for generating identifiers. It will catch any number that is entered with one incorrect digit, which implies any two valid numbers differ in at least two digits.
Check out Algorithm::LUHN on CPAN for a Perl implementation.
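For illustration (in Python rather than Perl, and not Algorithm::LUHN itself), computing a Luhn check digit for a 6-digit payload to form a 7-digit code might look like this:

def luhn_check_digit(payload):
    """Check digit for a string of digits, per the Luhn algorithm."""
    total = 0
    # Walk the payload right to left; double every second digit, starting
    # with the one adjacent to the (future) check digit position.
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

base = "123456"                       # 6 random digits
code = base + luhn_check_digit(base)  # "1234566" - the full 7-digit code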
Don't retrieve the existing codes; just generate a potential new code and see whether there are any conflicting ones in the database:
SELECT code FROM table WHERE abs(code-?) regexp '^[1-9]?0*$';
(where the placeholder is the newly generated code).
Ah, I missed the part about generating lots of codes at once. Do it like this (completely untested):
use strict;
use warnings;

my @codes = existing_codes();
my $frontwards_index = {};
my $backwards_index  = {};

for my $code (@codes) {
    index_code($code, $frontwards_index);
    index_code(scalar reverse($code), $backwards_index);
}

my @new_codes = map generate_code($frontwards_index, $backwards_index), 1 .. 10000;

# Bucket each code by its first half, so checks only compare against
# codes that share that half.
sub index_code {
    my ($code, $index) = @_;
    push @{ $index->{ substr($code, 0, length($code) / 2) } }, $code;
    return;
}

# True if any indexed code differs from $code in at most one digit
# (the string xor leaves "\0" wherever the characters match).
sub check_index {
    my ($code, $index) = @_;
    my $bucket = $index->{ substr($code, 0, length($code) / 2) } || [];
    my $found = grep { (($_ ^ $code) =~ tr/\0//c) <= 1 } @$bucket;
    return $found;
}

sub generate_code {
    my ($frontwards_index, $backwards_index) = @_;
    my $new_code;
    do {
        $new_code = sprintf("%07d", rand(10000000));
    } while check_index($new_code, $frontwards_index)
         || check_index(scalar reverse($new_code), $backwards_index);
    index_code($new_code, $frontwards_index);
    index_code(scalar reverse($new_code), $backwards_index);
    return $new_code;
}
Put the numbers 0 through 9,999,999 in an augmented binary search tree. The augmentation is to keep track of the number of sub-nodes to the left and to the right. So for example when your algorithm begins, the top node should have value 5,000,000, and it should know that it has 5,000,000 nodes to the left, and 4,999,999 nodes to the right. Now create a hashtable. For each value you've used already, remove its node from the augmented binary search tree and add the value to the hashtable. Make sure to maintain the augmentation.
To get a single value, follow these steps.
1. Use the top node to determine how many nodes are left in the tree. Let's say you have n nodes left. Pick a random number between 0 and n. Using the augmentation, you can find the nth node in your tree in log(n) time.
2. Once you've found that node, compute all the values that would make the value at that node invalid. Let's say your node has the value 1,111,111. If you already have 2,111,111 or 3,111,111 or ... then you can't use 1,111,111. Since there are 9 other options per digit and 7 digits, you only need to check 63 possible values. Check whether any of those values are in your hashtable. If you haven't used any of those values yet, you can use your random node. If you have used any of them, then you can't.
3. Remove your node from the augmented tree. Make sure that you maintain the augmented information.
4. If you can't use that value, return to step 1.
5. If you can use that value, you have a new random code. Add it to the hashtable.
Now, checking to see if a value is available takes O(1) time instead of O(n) time. Also, finding another available random value to check takes O(log n) time instead of... ah... I'm not sure how to analyze your algorithm.
Long story short, if you start from scratch and use this algorithm, you will generate a complete list of valid codes in O(n log n). Since n is 10,000,000, it will take a few seconds or something.
Did I do the math right there everybody? Let me know if that doesn't check out or if I need to clarify anything.
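A sketch of this scheme (in Python, with one flagged simplification: instead of an augmented BST, the unused values sit in a plain list and are removed by swapping with the last element, which still gives O(1) uniform random selection among the remaining values):

import random

def generate_codes(n):
    available = list(range(10_000_000))   # every possible 7-digit value (a sizeable list)
    used = set()                          # the "hashtable" of accepted values
    out = []
    while len(out) < n and available:
        # Step 1: pick a uniformly random remaining value and remove it.
        i = random.randrange(len(available))
        available[i], available[-1] = available[-1], available[i]
        value = available.pop()
        code = "%07d" % value
        # Step 2: check the 63 values that differ from it in exactly one digit.
        conflict = any(
            int(code[:p] + d + code[p + 1:]) in used
            for p in range(7)
            for d in "0123456789"
            if d != code[p]
        )
        # Steps 4-5: reject on conflict, otherwise accept and record the value.
        if not conflict:
            used.add(value)
            out.append(code)
    return out

codes = generate_codes(10_000)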
Use a hash.
After generating a successful code (one not conflicting with any existing code), put that code in the hash table, and also put the 63 other codes that differ from it by exactly one digit into the hash.
To see if a randomly generated code will conflict with an existing code, just check if that code exists in the hash.
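In Python terms, as a sketch of the same idea (a Perl version would use a plain hash the same way), expanding the one-digit variants of every accepted code into the set means each candidate needs only a single membership test:

import random

def generate_codes(n, existing=()):
    """Generate n new 7-digit codes, each differing from all others in >= 2 digits."""
    blocked = set()                      # accepted codes plus their one-digit variants

    def block(code):
        blocked.add(code)
        for p in range(7):
            for d in "0123456789":
                if d != code[p]:
                    blocked.add(code[:p] + d + code[p + 1:])

    for code in existing:
        block(code)

    out = []
    while len(out) < n:
        candidate = "%07d" % random.randrange(10_000_000)
        if candidate not in blocked:     # one O(1) membership test per candidate
            out.append(candidate)
            block(candidate)
    return out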
How about:
Generate a 6 digit code by autoincrementing the previous one.
Generate a 1 digit code by incrementing the previous one mod 10.
Concatenate the two.
Presto, guaranteed to differ in two digits. :D
(Yes, I'm being slightly facetious. I'm assuming that 'random', or at least quasi-random, is necessary. In that case, generate a 6-digit random key, repeat until it's not a duplicate (i.e. make the column unique and repeat until the insert doesn't fail the constraint), then generate a check digit, as someone already said.)