I need more clicking speed using pyautogui; the max CPS I got is 75.
The only way I found to increase the speed is by changing pyautogui.PAUSE.
How can I get more, or is there a limit?
There is a way related to pyautogui.PAUSE. I tried it, and it worked pretty fast:
from pyautogui import click
from time import sleep

for i in range(10):
    click()   # one mouse click per iteration
    sleep(0)  # no real delay; just yields control briefly
So, the issue you are having is directly related to the hardware. Even if I set
pyautogui.PAUSE = 0
my CPS is still capped at about 32. You aren't doing anything wrong, as far as I know, but if I did get something wrong, feel free to correct me.
EDIT: OK, I'm stupid. Setting pyautogui.PAUSE = 0 is actually LESS efficient (for me) than setting it to a very low value. Set pyautogui.PAUSE = 0.00001. I know it's weird, but I get 1000+ CPS.
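For anyone who wants to verify this on their own machine, here is a minimal benchmark sketch (the click count and PAUSE value are arbitrary choices; keep the cursor over a safe area of the screen while it runs):

import time
import pyautogui

pyautogui.PAUSE = 0.00001  # near-zero pause between pyautogui calls
pyautogui.FAILSAFE = True  # keep the move-to-corner abort enabled

n_clicks = 200
start = time.perf_counter()
for _ in range(n_clicks):
    pyautogui.click()
elapsed = time.perf_counter() - start
print(f"{n_clicks / elapsed:.0f} CPS")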
Alternatively, is there a way to force a re-evaluation of a single watch expression?
Say I have the following watch expression:
> Random.splitmix 123 '(Random.natIn 0 100)
When I run this, I might see a result like:
Now evaluating any watch expressions (lines starting with `>`)... Ctrl+C cancels.
5 | > Random.splitmix 123 '(Random.natIn 0 100)
⧩
56
Saving the file again will show the same result every time (it is cached).
I'm not sure whether Random results should be exempt from caching (maybe caching is still good default behavior to save on computation time), but I'm wondering what the best workarounds for this are.
debug.clear-cache doesn't work either in this situation, since each time the RNG (Random.splitmix) starts over with the same seed.
Of course, we can manually change the random seed, but this may not always be the desired behavior. A minor nitpick is that it involves unnecessary keystrokes and creates additional caching (one cached result per seed), so you have to remember which seeds you've already used.
You can clear the expression cache with debug.clear-cache in UCM.
That said, re-evaluating your expression is actually going to give the same result every time! Random.splitmix is completely deterministic, so the result you get depends on the seed you provide and on nothing else.
So you could clear the cache here, but it’s not going to do anything.
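For intuition, here is the same determinism illustrated in Python rather than Unison (a sketch only; the seed and range mirror the watch expression above): a PRNG seeded with a fixed value produces the identical output on every run, so re-running the computation cannot yield a new number.

import random

for _ in range(3):
    rng = random.Random(123)    # fixed seed, analogous to splitmix 123
    print(rng.randint(0, 100))  # prints the same value all three times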
To get a really random value, you need to use IO, which is not allowed in watch expressions. You'd need to provide I/O to your program using run in UCM.
Since the watch expression would somehow need to maintain the random state, which is likely more trouble than it is worth, manually editing the random seed is probably the best compromise; re-evaluating will always start from the initial value produced by the given seed.
Alternatively (or in addition), evaluating a list of random values from a single seed may be useful.
I'm running a glmer model with a three-way interaction, which causes me to receive the following warning:
Warning:
In optwrap(optimizer, devfun, start, rho$lower, control = control, :
convergence code 1 from nlminbwrap
The warning is not there when the 3-way interaction is omitted, so I suspect it has to do with model complexity.
Unfortunately, the warning gives no further information about the nature of the convergence issue (and neither does the model summary), which makes it hard to resolve. (I've already tried different optimizers and increasing the number of function evaluations.)
Is there any way of finding out what precisely convergence code 1 means? Also, is it as serious as the warning that says Model failed to converge? I've been looking for an answer in the R help pages and in the GLMM FAQs, but can't seem to find one. Any help is much appreciated!
Okay, so I've done some reading here with the hope of being able to help out a fellow linguist. Let's start with the model you specified in the comments:
model = glmer(Correct_or_incorrect ~ (condition | CASE) + condition + sound + syll +
              condition:sound + condition:syll + syll:sound + condition:sound:syll,
              data = dataMelt, control = glmerControl(optimizer = "nlminbwrap"),
              family = binomial)
The warning message itself doesn't say anything useful, but convergence code 1 from bobyqa at least used to mean that the maximum number of function evaluations was exceeded. How high did you go with the iterations? I would set the limit really high and see if the warning goes away; all you'd lose is a few hours of computer time, and I personally think that's a small price to pay for a model that doesn't throw warnings.
You also mentioned that the warning disappears when the three-way interaction is omitted, and I would be inclined to agree that it's about model complexity. If you don't have any specific hypotheses about that interaction, I would leave it out and be done; if you do, there are a few options you haven't mentioned trying yet.
There is a function called allFit() that will fit the model with all available optimizers. This is a quick and easy way to see whether your estimates are roughly the same across the different optimizers. You run it on an already fitted model, like this:
allFit(model)
There is a good walkthrough of allFit() and its different arguments here: https://joshua-nugent.github.io/allFit/. That page also lists a number of other potential solutions to your problem.
If you can, take advantage of a machine with multiple cores and run allFit() with as many iterations as you can swing, and see whether any of the optimizers avoid this warning, which is presumably about failing to minimize the loss function before the evaluation limit runs out.
It seems like this is probably a common issue with OCR. Is there a way to tell Tesseract that my 1's are actually 1's?
Hopefully without turning my 7's into 1's in the process.
Note: these are scanned documents, and I have no idea what font was used.
if "tesseract" is trainable, try to train it on the font manually. It should solve the problem.
There is another possible solution: add a small validation module that runs after Tesseract. For all 1s and 7s, double-check the glyph with an intensity-based method. For example, find corners (feature points) on it and apply KLT tracking against a 1 template and a 7 template, and see which one gets the better tracking result. This method is costly, but since you apply it to just two small templates, I don't think it will cause a noticeable performance decrease.
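As a rough illustration of that double check, here is a sketch that uses plain normalized template matching (OpenCV's cv2.matchTemplate) in place of full corner-plus-KLT tracking; the file names are placeholders, and the glyph crop is assumed to be at least as large as the templates:

import cv2

glyph = cv2.imread("glyph.png", cv2.IMREAD_GRAYSCALE)       # the digit OCR flagged
one = cv2.imread("template_1.png", cv2.IMREAD_GRAYSCALE)    # reference "1"
seven = cv2.imread("template_7.png", cv2.IMREAD_GRAYSCALE)  # reference "7"

def score(template):
    # TM_CCOEFF_NORMED yields scores in [-1, 1]; higher means a better match
    return cv2.matchTemplate(glyph, template, cv2.TM_CCOEFF_NORMED).max()

print("1" if score(one) >= score(seven) else "7")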
If neither solution is possible, try post-processing. For example, if the field is a student's age, it can't be 78, so it should be 18, and so on. This is a crude heuristic rather than a real solution, but when nothing else works, you may have to do something like it.
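A minimal sketch of that heuristic in Python (the plausible age range and the age-field semantics are assumptions for illustration):

def fix_age(ocr_text: str, lo: int = 5, hi: int = 25) -> str:
    # Swap the first 7 for a 1 when that turns an implausible age plausible
    if ocr_text.isdigit():
        value = int(ocr_text)
        if not (lo <= value <= hi):
            candidate = int(ocr_text.replace("7", "1", 1))
            if lo <= candidate <= hi:
                return str(candidate)
    return ocr_text

print(fix_age("78"))  # -> "18", matching the example above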
From the ActionScript 3.0 documentation:
Global Functions > Math.random()
Returns a pseudo-random number n, where 0 <= n < 1. The number returned is calculated in an undisclosed manner, and is "pseudo-random" because the calculation inevitably contains some element of non-randomness.
I'm interested in reading the source code for Math.random(), and I assume it's the same in other C-based languages like AS3. Is it available for viewing?
Can anyone explain which elements make the code pseudo-random, and why? Is it impossible to create a function that returns a truly random value?
There are a whole bunch of pseudo-random generator functions; the most common one, if you aren't doing high-end crypto, is probably a linear congruential generator. See Wikipedia for a description and links to implementation code.
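For concreteness, here is a minimal linear congruential generator sketch in Python (the multiplier and increment are the classic Numerical Recipes constants, chosen purely for illustration; a real runtime may use different parameters). Each output is fully determined by the previous state, which is exactly what makes it pseudo-random:

class LCG:
    def __init__(self, seed: int):
        self.state = seed & 0xFFFFFFFF  # keep 32 bits of state

    def next_float(self) -> float:
        # state = (a * state + c) mod 2^32, then scale into [0, 1)
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state / 2**32

rng = LCG(42)
print([round(rng.next_float(), 3) for _ in range(3)])  # same seed, same sequence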
To get real random numbers, you can use a web service such as random.org, which derives its randomness from atmospheric noise.
A lot of generators rely on the system time for their seed, if I remember rightly, since it changes so quickly.
If you seed with the same system time, you get the same random output.
As for true randomness: it's not possible, since there's no bit in a computer that wasn't deliberately set. You could argue it would be random if you reached into something else's memory space and grabbed something, but that's all deterministic, just like the time.
Problem #305

Let's call S the (infinite) string that is made by concatenating the consecutive positive integers (starting from 1) written down in base 10. Thus, S = 1234567891011121314151617181920212223242...

It's easy to see that any number will show up an infinite number of times in S.

Let's call f(n) the starting position of the nth occurrence of n in S. For example, f(1) = 1, f(5) = 81, f(12) = 271 and f(7780) = 111111365.

Find the sum of f(3^k) for 1 <= k <= 13.
How can I go about solving this?
Calculating S to an arbitrary size is deceptively easy but, as you have probably already found out, not practical: S simply becomes too big.
As is common for the newer Project Euler problems, brute force simply does not work.
That said, you can still look at S for small values of k and try to construct a formula that solves the problem in parts (the first few values are easy to handle in memory; see the sketch below). Also, look at Problem 40.
Note: remember the one-minute rule (most problems can be solved in a few milliseconds).
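For exploring those small values, a brute-force sketch like the following works (this is exactly the approach that will not scale to k = 13; the prefix length is an arbitrary assumption):

from itertools import count

def build_prefix(prefix_len: int) -> str:
    # Concatenate 1, 2, 3, ... until the prefix of S is long enough
    parts, total = [], 0
    for i in count(1):
        s = str(i)
        parts.append(s)
        total += len(s)
        if total >= prefix_len:
            return "".join(parts)

def f(n: int, prefix_len: int = 10**6) -> int:
    # 1-based starting position of the nth occurrence of n in S
    s = build_prefix(prefix_len)
    target, pos = str(n), -1
    for _ in range(n):
        pos = s.find(target, pos + 1)  # pos + 1 permits overlapping matches
        if pos == -1:
            raise ValueError("prefix too short for this n")
    return pos + 1

print(f(1), f(5), f(12))  # expected 1, 81, 271 per the problem statement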
My estimate of the running time is O(n^2 log n), so this brute-force approach is not feasible.
Note that you are supposed to solve Project Euler problems yourself, which IMHO applies in particular to newer problems.