Using Sikuli Finder() to search a screenshot for an icon gives a cached-like response when used in a loop - sikuli

(Using the Sikuli IDE, build -288 of 20/04/19, on Windows 10)
I am currently having issues with a portion of code that runs correctly the first time around, but when the function is looped a second time it somehow uses the old information from the first iteration instead of overwriting it.
A function called selectRewards() is called. It takes a few screenshots of the reward region over a few seconds to capture a usable still of an animation, incrementing the file name numerically. The function then creates a Finder from the screenshots, starting with screenshot 1. The Finder and the image I want to check against are passed into a search() function, which uses them to find matches. It checks for all defined images in screenshot1, screenshot2, etc. until matches are found, and the matches are clicked on the screen using coordinates taken from the screenshot image.
This all works well in the first iteration of selectRewards(): it cycles through the screenshots and finds the images on a stable screenshot. But when the function is called again, the exact same "found" results are returned and the clicks land in exactly the same places, even when the images don't exist in the new screenshots (I've even deleted the screenshots at the end of the first loop to try to clear any stale information being fed to the Finder).
I've tried to pull the section out to share it in a cleaner way, and it still shows the same issue. Any help and advice would be deeply appreciated.
(I'm currently seeing even stranger behaviour: with the main script open in one IDE tab and the new script in another, neither running, running the snippet script will use the coordinates/image finds from a previous run of the scripts.) Could there be some sort of memory issue or caching in Windows? ALT+SHIFT+R to restart the IDE normally clears the issue.
Settings.MoveMouseDelay = 0.5

#Define Regions
rewardRegion = Region(536,429,832,207)

#Define Images
searchCoupons = Pattern("coupons.png").similar(0.85)
searchAdvanced = Pattern("2011.png").similar(0.85)
searchAdvancedFrag = Pattern("2012.png").similar(0.85)

matchesFound = False

def search(image, rewardGlimpse, descr=""):
    print("##### searching for: (%s) %s" % (image, descr))
    rewardGlimpse.findAll(image)  # find all matches (using passed finder variable & image variable)
    matches = []  # an empty list to store the matches
    while rewardGlimpse.hasNext():  # loop as long as there is a first and more matches
        matches.append(rewardGlimpse.next())  # access next match and add to matches
    # now we have our matches saved in the list matches
    print(" Does FindAll have next? (should be false):" + str(rewardGlimpse.hasNext()))
    print(" Found matches:" + str(len(matches)))
    if len(matches) > 0:
        global matchesFound
        matchesFound = True
        obtainedReward = str(descr)
        print(" Match found should be true " + str(matchesFound) + ". Found: " + obtainedReward)
        # we want to use our matches
        for m in matches:
            # Find x & y location of rewards in screenshot
            matchx = m.x
            matchy = m.y
            # Append them to the reward region to line it up.
            newx = rewardRegion.getX() + matchx
            newy = rewardRegion.getY() + matchy
            rewardHover = Location(newx, newy)
            # click the found reward location
            click(rewardHover)
            wait(1)

def selectRewards():
    #---- Save Incremental Screenshots
    wait(1)
    capture(rewardRegion, "screenshot1.png")
    wait(0.5)
    capture(rewardRegion, "screenshot2.png")
    wait(0.5)
    capture(rewardRegion, "screenshot3.png")
    wait(0.5)
    capture(rewardRegion, "screenshot4.png")
    wait(0.5)
    capture(rewardRegion, "screenshot5.png")
    wait(0.5)
    capture(rewardRegion, "screenshot6.png")
    wait(0.5)
    #----- Test the screenshots
    snum = 1  # screenshot file number
    while True:
        global matchesFound
        if matchesFound == True:
            print("Rewards Found - breaking search loop")
            matchesFound = False
            break
        # Start with _screenshot1.png, increment snum.
        screenshotURL = "_screenshot" + str(snum) + ".png"
        rewardGlimpse = Finder(screenshotURL)  # Setup the Finder
        print("Currently searching in: " + str(screenshotURL))
        # Pass along the image to search, the screenshot's Finder, and description.
        search(searchCoupons, rewardGlimpse, "Coupons")
        search(searchAdvanced, rewardGlimpse, "Advanced Recruit Proof")
        search(searchAdvancedFrag, rewardGlimpse, "Advanced Recruit Fragments")
        snum = snum + 1
        if snum > 6:
            break
    while True:  # All rewards available this round are collected
        if exists("1558962266403.png"):
            click("1558962266403.png")
            #confirm
            break
        print("No reward found at this point.")
        print("Matches Found at No Reward Debug: " + str(matchesFound))
        #Needed matches not found, selecting random reward.
        hover("1558979979033.png")
        click("1558980645042.png")
        #matchesFound = False  # Toggle back to False
        #print("Matches found: Variable Value(Should be false)" + str(matchesFound))

def main():
    i = 0
    SSLoops = 2
    while i < 2:
        print("Loop #" + str(i + 1) + "/" + str(SSLoops))
        print("--------------")
        if i == 1:  # remove this if statement for live
            click("1559251066942.png")  # switches spoofed html pages to show diff rewards
        selectRewards()
        i = i + 1

if __name__ == '__main__':
    main()
Loop one of calling selectRewards() is correct: there were 3 images in the reward region that matched the things to search for. The second loop is incorrect: only one of the matching images was there, and it wasn't in the same position, yet the script clicked the same 3 locations from the previous loop.
Message log:
====
Loop #1/2
--------------
Currently searching in: _screenshot1.png
##### searching for: (P(coupons.png) S: 0.85) Coupons
Does FindAll have next? (should be false):False
Found matches:1
Match found should be true True. Found: Coupons
[log] CLICK on L[603,556]#S(0) (586 msec)
##### searching for: (P(2011.png) S: 0.85) Advanced Recruit Proof
Does FindAll have next? (should be false):False
Found matches:1
Match found should be true True. Found: Advanced Recruit Proof
[log] CLICK on L[653,556]#S(0) (867 msec)
##### searching for: (P(2012.png) S: 0.85) Advanced Recruit Fragments
Does FindAll have next? (should be false):False
Found matches:1
Match found should be true True. Found: Advanced Recruit Fragments
[log] CLICK on L[703,556]#S(0) (539 msec)
Rewards Found - breaking search loop
[log] CLICK on L[90,163]#S(0) (541 msec)
Loop #2/2
--------------
[log] CLICK on L[311,17]#S(0) (593 msec)
Currently searching in: _screenshot1.png
##### searching for: (P(coupons.png) S: 0.85) Coupons
Does FindAll have next? (should be false):False
Found matches:1
Match found should be true True. Found: Coupons
[log] CLICK on L[603,556]#S(0) (617 msec)
##### searching for: (P(2011.png) S: 0.85) Advanced Recruit Proof
Does FindAll have next? (should be false):False
Found matches:1
Match found should be true True. Found: Advanced Recruit Proof
[log] CLICK on L[653,556]#S(0) (535 msec)
##### searching for: (P(2012.png) S: 0.85) Advanced Recruit Fragments
Does FindAll have next? (should be false):False
Found matches:1
Match found should be true True. Found: Advanced Recruit Fragments
[log] CLICK on L[703,556]#S(0) (539 msec)
Rewards Found - breaking search loop
[log] CLICK on L[304,289]#S(0) (687 msec)
====

RaiMan from SikuliX:
OK, the reason is the internal image caching, which is keyed on filenames.
When a new Finder is created with an image filename that is already in the cache, the cached version is used.
So you can either switch off caching globally:
Settings.setImageCache(0)
or add:
Image.reset()
at the beginning of selectRewards(),
or add the loop count to the filenames of the captured images (take care: memory usage then grows constantly!).
BTW: when selecting another tab in the IDE, the images from the previously selected tab are uncached automatically.
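A minimal sketch of the second suggestion (how it might be wired into the script above; only Image.reset() and Settings.setImageCache() come from the answer, the placement is an assumption):

def selectRewards():
    # Drop all cached images so each Finder re-reads the freshly captured
    # screenshot files instead of reusing the previous iteration's pixels.
    Image.reset()
    # ... rest of selectRewards() unchanged ...

# Alternatively, disable the image cache globally once at the top of the script:
# Settings.setImageCache(0)
# Or make every captured filename unique per loop, e.g. (loopCount is a hypothetical variable):
# capture(rewardRegion, "screenshot1_" + str(loopCount) + ".png")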

Related

Angr for an HTB challenge, no solution found

I'm new to RE. I'm trying to solve a HackTheBox challenge called RAuth with angr. I understand how to analyze and solve this challenge differently, without angr, but I really want to understand what is wrong with my angr script, or why angr might not be feasible for this (and similar) cases.
The application is simple: after it starts, it requests the password from stdin and exits if the password is incorrect:
>:~/challenges/rauth$ ./rauth
Welcome to secure login portal!
Enter the password to access the system:
wqeqwwqewqewqeqwewqeqweqweqweqweqwewqeqwe
You entered a wrong password!
When I run my script, it works for around 15 minutes and then dies with one of two errors:
"Killed"
"IndexError: tuple index out of range"
My angr script:
#!/usr/bin/env python
#coding: utf-8
import angr
import claripy
import time
import sys

def is_successful(st):
    output = st.posix.dumps(sys.stdout.fileno())
    if b'Successfully Authenticated' in output:
        return True
    return False

def should_abort(state):
    output = state.posix.dumps(sys.stdout.fileno())
    return b'You entered a wrong password!' in output

def main(round):
    print("Checking " + str(round))
    p = angr.Project('rauth', auto_load_libs=False)
    flag_chars = [claripy.BVS('flag_%d' % i, 8) for i in range(round)]
    flag = claripy.Concat(*flag_chars + [claripy.BVV(b'\n')])
    st = p.factory.full_init_state(
        args=['./rauth'],
        add_options=angr.options.unicorn,
        stdin=angr.SimPackets(name='stdin', content=[(flag, 32)])
        #stdin=flag,
    )
    st.options.add(angr.options.SYMBOL_FILL_UNCONSTRAINED_MEMORY)
    st.options.add(angr.options.SYMBOL_FILL_UNCONSTRAINED_REGISTERS)
    for k in flag_chars:
        st.solver.add(k >= ord(" "))
        st.solver.add(k <= ord("~"))
    sm = p.factory.simulation_manager(st)
    #sm.explore(find=is_successful, avoid=should_abort)
    sm.explore(find=lambda s: b"Successfully" in s.posix.dumps(1),
               avoid=lambda s: b"wrong" in s.posix.dumps(1))
    if len(sm.found) > 0:
        for solution_state in sm.found:  # was "ex.found" in the original post; "ex" is never defined
            print("[>>] {!r}".format(solution_state.solver.eval(flag, cast_to=bytes)))
    else:
        print("[>>] no solution found :(")

if __name__ == "__main__":
    print(main(32))
Am I using angr for a case where it's not applicable? What am I missing?
I also tried to play with the options: removing unicorn, enabling auto_load_libs, using or disabling the lambdas, etc. No success.

Why am I not able to detect a None value from a dictionary

I have seen this issue happen to many people (here), but I am still struggling to validate whether what my dictionary captures from a JSON response is None or not, and I still get the following error.
This code is supposed to issue a cURL-style GET request looking for the 'closed' value in the 'status' key until it finds it (or up to 10 times). When payment is made via the QR code, the status changes from opened to closed.
status = (my_dict['elements'][0]['status'])
TypeError: 'NoneType' object is not subscriptable
Any clue about what I am doing wrong and how I can fix it?
Also, if I run the part of the script that calls the JSON standalone, it executes smoothly every time. Is there anything in the code that could be affecting the request?
By the way, I started programming 1 week ago, so please excuse me if I mix up concepts or say something that lacks common sense.
I have tried to validate the IF with "is not" instead of "!=", and also with "None" instead of "".
def show_qr():
    reference_json = reference.replace(' ', '%20')  # replaces "space" with %20 for CURL assembly
    url = "https://api.mercadopago.com/merchant_orders?external_reference=" + reference_json  # CURL URL concatenate
    headers = CaseInsensitiveDict()
    headers["Authorization"] = "Bearer MY_TOKEN"
    pygame.init()
    ventana = pygame.display.set_mode(window_resolution, pygame.FULLSCREEN)  # screen settings
    producto = pygame.image.load("qrcode001.png")  # QR image load
    producto = pygame.transform.scale(producto, [640, 480])  # QR size
    trials = 0  # sets while loop variable start value
    status = "undefined"  # defines "status" variable
    while status != "closed" and trials < 10:  # repeat the loop until "status" value = "closed"
        ventana.blit(producto, (448, 192))  # QR code position setting
        pygame.display.update()
        response = requests.request("GET", url, headers=headers)  # makes CURL GET
        lag = 0.5  # creates an incremental 0.5 seconds every time return value is None
        sleep(lag)
        json_data = (response.text)  # captures JSON response as text
        my_dict = json.loads(json_data)  # creates a dictionary with JSON data
        if json_data != "":  # checks if json_data is None
            status = (my_dict['elements'][0]['status'])  # if json_data is not None, assigns 'status' key to "status" variable
        else:
            lag = lag + 0.5  # increments lag
        trials = trials + 1  # increments loop variable
        sleep(5)  # time to avoid being banned from the server
        print(trials)
From the error you encountered, it's not obvious which part is the issue: basically any part of that statement can raise a TypeError when the part being evaluated is None. For example, my_dict['elements'][0]['status'] can fail if my_dict is None, or also if my_dict['elements'] is None.
I would try inserting breakpoints to help debug the cause. Another approach would be to wrap each part of the statement in a try/except block, as below:
my_dict = None
try:
    elements = my_dict['elements']
except TypeError as e:
    print("It's possible that my_dict may be None.")
    print('Error:', e)
else:
    try:
        ele = elements[0]
    except TypeError as e:
        print("It's possible that elements may be None.")
        print('Error:', e)
    else:
        try:
            status = ele['status']
        except TypeError as e:
            print("It's possible that the first element may be None.")
            print('Error:', e)
        else:
            print('Got the status successfully:', status)
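Equivalently (a minimal sketch, not part of the original answer; extract_status is just an illustrative helper name), you can guard each level explicitly before subscripting, which avoids the nested try/except:

import json

def extract_status(json_data):
    # Returns the status string, or None if any level of the structure is missing.
    if not json_data:                # empty or None response body
        return None
    my_dict = json.loads(json_data)
    elements = my_dict.get('elements') if my_dict else None
    if not elements:                 # 'elements' missing, None, or an empty list
        return None
    first = elements[0]
    if not first:                    # first element is None
        return None
    return first.get('status')

print(extract_status('{"elements": [{"status": "closed"}]}'))  # closed
print(extract_status(''))                                      # None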

When using the frame skipping wrapper for OpenAI Gym, what is the purpose of the np.max line?

I'm implementing the following wrapper, commonly used in OpenAI Gym for frame skipping. It can be found in dqn/atari_wrappers.py.
I'm very confused about the following line:
max_frame = np.max(np.stack(self._obs_buffer), axis=0)
I have added comments throughout the code for the parts I understand and to aid anyone who may be able to help.
np.stack(self._obs_buffer) stacks the two states in _obs_buffer.
np.max returns the maximum along axis 0.
But what I don't understand is why we're doing this or what it's really doing.
class MaxAndSkipEnv(gym.Wrapper):
    """Return only every 4th frame"""
    def __init__(self, env=None, skip=4):
        super(MaxAndSkipEnv, self).__init__(env)
        # Initialise a double-ended queue that can store a maximum of two states
        self._obs_buffer = deque(maxlen=2)
        # _skip = 4
        self._skip = skip

    def _step(self, action):
        total_reward = 0.0
        done = None
        for _ in range(self._skip):
            # Take a step
            obs, reward, done, info = self.env.step(action)
            # Append the new state to the double-ended queue buffer
            self._obs_buffer.append(obs)
            # Update the total reward by summing the (reward obtained from the step taken)
            # + (the current total reward)
            total_reward += reward
            # If the game ends, break the for loop
            if done:
                break
        max_frame = np.max(np.stack(self._obs_buffer), axis=0)
        return max_frame, total_reward, done, info
At the end of the for loop, self._obs_buffer holds the last two frames.
Those two frames are then max-pooled element-wise, producing an observation that still carries some temporal information. In Atari games this also smooths over sprite flicker: some objects are only drawn on alternating frames, so taking the pixel-wise maximum of two consecutive frames keeps them visible.
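A tiny, self-contained illustration of what that line computes (plain NumPy, no Gym needed):

import numpy as np

# Two consecutive 2x2 "frames"; imagine a sprite that is visible in only one of them.
frame_a = np.array([[0, 255],
                    [0,   0]])
frame_b = np.array([[0,   0],
                    [0, 128]])

stacked = np.stack([frame_a, frame_b])   # shape (2, 2, 2): the two frames stacked on a new axis 0
max_frame = np.max(stacked, axis=0)      # element-wise maximum across the two frames

print(max_frame)
# [[  0 255]
#  [  0 128]]
# A pixel that is lit in either frame survives into the returned observation.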

Collecting Tweets using max_id is not working as expected

I am currently doing a tweet search using the Twitter API. However, paging back with the last tweet's ID is not working for me.
Here is my code:
searchQuery = '#BLM'  # this is what we're searching for
searchQuery = searchQuery + " -filter:retweets"  # note: the space before -filter is needed so it is parsed as a separate operator
Geocode = "39.8, -95.583068847656, 2500km"
maxTweets = 1000000  # Some arbitrary large number
tweetsPerQry = 100  # this is the max the API permits
fName = 'tweetsBLM.json'  # We'll store the tweets in a json file.
sinceId = None
#max_id = -1  # initial search
max_id = 1278836959926980609  # the last id of the previous search
tweetCount = 0
print("Downloading max {0} tweets".format(maxTweets))
with open(fName, 'w') as f:
    while tweetCount < maxTweets:
        try:
            if (max_id <= 0):
                if (not sinceId):
                    new_tweets = api.search(q=searchQuery, lang="en", geocode=Geocode,
                                            count=tweetsPerQry)
                else:
                    new_tweets = api.search(q=searchQuery, lang="en", geocode=Geocode,
                                            count=tweetsPerQry,
                                            since_id=sinceId)
            else:
                if (not sinceId):
                    new_tweets = api.search(q=searchQuery, lang="en", geocode=Geocode,
                                            count=tweetsPerQry,
                                            max_id=str(max_id - 1))
                else:
                    new_tweets = api.search(q=searchQuery, lang="en", geocode=Geocode,
                                            count=tweetsPerQry,
                                            max_id=str(max_id - 1),
                                            since_id=sinceId)
            if not new_tweets:
                print("No more tweets found")
                break
            for tweet in new_tweets:
                f.write(jsonpickle.encode(tweet._json, unpicklable=False) + '\n')
            tweetCount += len(new_tweets)
            print("Downloaded {0} tweets".format(tweetCount))
            max_id = new_tweets[-1].id
        except tweepy.TweepError as e:
            # Just exit if any error
            print("some error : " + str(e))
            print('exception raised, waiting 15 minutes')
            print('(until:', dt.datetime.now() + dt.timedelta(minutes=15), ')')
            time.sleep(15 * 60)
            break
print("Downloaded {0} tweets, Saved to {1}".format(tweetCount, fName))
This code works perfectly fine: I initially ran it and got about 40,000 tweets. Then I took the ID of the last tweet of the previous/initial search to go back in time. However, I was disappointed to see that there were no tweets any more. I can't believe that for a second; I must be going wrong somewhere, because #BLM has been very active in the last 2-3 months.
Any help is very welcome. Thank you.
I may have found the answer. Using the standard Twitter API, it is not possible to get older tweets (7 days old or more), and using max_id to get around this is not possible either.
The only way is to stream and wait for more than 7 days.
Finally, there is also this library that looks for older tweets:
https://pypi.org/project/GetOldTweets3/ - it is an extension of Jefferson Henrique's original work.
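For the streaming route, a minimal sketch using tweepy 3.x (the same tweepy generation the api.search/TweepError code above implies; the streaming classes were renamed in tweepy 4.x). The credentials and file name are placeholders:

import tweepy

class HashtagListener(tweepy.StreamListener):
    """Appends incoming matching tweets to a file as they arrive."""
    def __init__(self, out_path, api=None):
        super(HashtagListener, self).__init__(api)
        self.out = open(out_path, 'a')

    def on_status(self, status):
        # One line per tweet: id and flattened text
        self.out.write(status.id_str + '\t' + status.text.replace('\n', ' ') + '\n')

    def on_error(self, status_code):
        if status_code == 420:  # rate-limit disconnect: stop the stream
            return False

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
stream = tweepy.Stream(auth=auth, listener=HashtagListener('tweetsBLM_stream.txt'))
stream.filter(track=['#BLM'], languages=['en'])  # blocks and collects tweets as they are posted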

World of Tanks Python list comparison from JSON

OK, I am trying to create a function that reads a list of IDs from an external JSON file, which it is doing; it even puts the data into the database on load of the program. My issue is that I can't seem to match the list IDs in a comparison. Here is my current code:
def check(account):
    global ID_account
    import json, httplib
    if not hasattr(BigWorld, 'iddata'):
        UID_DB = account['databaseID']
        UID = ID_account
        try:
            conn = httplib.HTTPConnection('URL')
            conn.request('GET', '/ids.json')
            conn.sock.settimeout(2)
            resp = conn.getresponse()
            qresp = resp.read()
            BigWorld.iddata = json.loads(qresp)
            LOG_NOTE('[ABRO] Request of URL data successful.')
            conn.close()
        except:
            LOG_NOTE('[ABRO] Http request to URL problem. Loading local data.')
        if UID_DB is not None:
            list = BigWorld.iddata["ids"]
            #print (len(list) - 1)
            for n in range(0, (len(list) - 1)):  # note: this range stops one short of the last entry
                #print UID_DB
                #print list[n]
                if UID_DB == list[n]:
                    #print '[ABRO] userid located:'
                    #print UID_DB
                    UID = UID_DB
        else:
            LOG_NOTE('[ABRO] userid not set.')
        if 'databaseID' in account and account['databaseID'] != UID:
            print '[ABRO] Account not active in database, game closing...... '
            BigWorld.quit()
Now my JSON file looks like this:
{
    "ids": [
        "1001583757",
        "500687699",
        "000000000"
    ]
}
Now when I run this with all the commented-out prints enabled, it seems to execute perfectly fine until it tries to do the match inside the for loop. Even when the prints show UID_DB and list[n] having the same values, it does not set my variable and it doesn't post any errors; it simply acts as if there was no match. Am I possibly missing a loop break? Here is the Python log, starting with the print of the length of the list:
INFO: 2
INFO: 1001583757
INFO: 1001583757
INFO: 1001583757
INFO: 500687699
INFO: [ABRO] Account not active, game closing......
As you can see from the log, it never prints the "userid located" message, so it is not matching them; it just continues with the loop and uses the default ID I defined above the function. Anyone with an idea would definitely help me out, as I've been poking and prodding this thing for 3 days now.
The answer to this was found by #VikasNehaOjha: it was simply missing a conversion to matching types before the comparison. I fixed it by adding
list[n] = int(list[n])
and that resolved my issue; the comparisons finally matched.
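A minimal standalone sketch of the underlying issue (plain Python, outside the game client): the JSON file stores the IDs as strings, while the fix implies account['databaseID'] arrives as an integer, and Python never considers an int equal to its string form:

import json

ids = json.loads('{"ids": ["1001583757", "500687699"]}')["ids"]
uid_db = 1001583757                 # integer, as the databaseID apparently arrives

print(uid_db == ids[0])             # False: an int and a str never compare equal
print(uid_db == int(ids[0]))        # True once both sides are ints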