Angr for an HTB challenge, no solution found - reverse-engineering

I'm new to RE. I'm trying to solve a HackTheBox challenge called RAuth with angr. I understand how to analyze and solve this challenge differently, without angr, but I really want to understand what is wrong with my angr script, or why angr might not be feasible for this case (and similar ones).
The application is simple: after it starts, it requests a password from stdin and exits if the password is incorrect:
>:~/challenges/rauth$ ./rauth
Welcome to secure login portal!
Enter the password to access the system:
wqeqwwqewqewqeqwewqeqweqweqweqweqwewqeqwe
You entered a wrong password!
When I run my script it runs for around 15 minutes and then dies with one of two errors:
"Killed"
"IndexError: tuple index out of range"
My angr script:
#!/usr/bin/env python
# coding: utf-8
import angr
import claripy
import sys

def is_successful(st):
    output = st.posix.dumps(sys.stdout.fileno())
    return b'Successfully Authenticated' in output

def should_abort(state):
    output = state.posix.dumps(sys.stdout.fileno())
    return b'You entered a wrong password!' in output

def main(round):
    print("Checking " + str(round))
    p = angr.Project('rauth', auto_load_libs=False)
    flag_chars = [claripy.BVS('flag_%d' % i, 8) for i in range(round)]
    flag = claripy.Concat(*flag_chars + [claripy.BVV(b'\n')])
    st = p.factory.full_init_state(
        args=['./rauth'],
        add_options=angr.options.unicorn,
        stdin=angr.SimPackets(name='stdin', content=[(flag, 32)])
        #stdin=flag,
    )
    st.options.add(angr.options.SYMBOL_FILL_UNCONSTRAINED_MEMORY)
    st.options.add(angr.options.SYMBOL_FILL_UNCONSTRAINED_REGISTERS)
    for k in flag_chars:
        st.solver.add(k >= ord(" "))
        st.solver.add(k <= ord("~"))
    sm = p.factory.simulation_manager(st)
    #sm.explore(find=is_successful, avoid=should_abort)
    sm.explore(find=lambda s: b"Successfully" in s.posix.dumps(1),
               avoid=lambda s: b"wrong" in s.posix.dumps(1))
    if len(sm.found) > 0:
        for solution_state in sm.found:  # was `ex.found`, a NameError in the original
            print("[>>] {!r}".format(solution_state.solver.eval(flag, cast_to=bytes)))
    else:
        print("[>>] no solution found :(")

if __name__ == "__main__":
    print(main(32))
Am I using angr in a case where it's not applicable? What am I missing?
I also tried playing with the options, like removing unicorn, enabling auto_load_libs, using or disabling the lambdas, etc. No success.
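A diagnostic that may help narrow this down (a sketch only, reusing the project and state from the script above, not a solution): step the simulation manager manually and watch the active-state count. "Killed" typically means the kernel's OOM killer terminated the process, and a rapidly growing state count would point to path explosion as the cause of both the memory exhaustion and the long runtime.

# Diagnostic sketch: reuses `p` and `st` from the script above.
sm = p.factory.simulation_manager(st)
for step in range(100):  # cap the steps; this is just for observation
    if not sm.active:
        break
    sm.step()
    print("step", step, "active states:", len(sm.active))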

Related

Why am I not able to detect a None value from a dictionary

I have seen this issue happen to many people. I am still struggling to validate whether what my dictionary captures from a JSON response is None or not, but I still get the following error.
This code is supposed to make a curl-style GET request looking for the value 'closed' in the 'status' key until it finds it (or up to 10 times). When payment is made by means of a QR code, the status changes from 'opened' to 'closed'.
status = (my_dict['elements'][0]['status'])
TypeError: 'NoneType' object is not subscriptable
Any clue what I am doing wrong and how I can fix it?
Also, if I run the part of the script that fetches the JSON standalone, it executes smoothly every time. Is there anything in the code that could be affecting the request?
By the way, I started programming 1 week ago, so please excuse me if I mix concepts or say something that lacks common sense.
I have tried validating the IF with "is not" instead of "!=", and also with None instead of "".
def show_qr():
    reference_json = reference.replace(' ', '%20')  # replaces spaces with %20 for URL assembly
    url = "https://api.mercadopago.com/merchant_orders?external_reference=" + reference_json  # URL concatenation
    headers = CaseInsensitiveDict()
    headers["Authorization"] = "Bearer MY_TOKEN"
    pygame.init()
    ventana = pygame.display.set_mode(window_resolution, pygame.FULLSCREEN)  # screen settings
    producto = pygame.image.load("qrcode001.png")  # QR image load
    producto = pygame.transform.scale(producto, [640, 480])  # QR size
    trials = 0  # while-loop counter
    status = "undefined"  # defines the "status" variable
    while status != "closed" and trials < 10:  # repeats the loop until the "status" value is "closed"
        ventana.blit(producto, (448, 192))  # QR code position setting
        pygame.display.update()
        response = requests.request("GET", url, headers=headers)  # makes the GET request
        lag = 0.5  # meant to add an incremental 0.5-second wait every time the return value is None
        sleep(lag)
        json_data = response.text  # captures the JSON response as text
        my_dict = json.loads(json_data)  # creates a dictionary from the JSON data
        if json_data != "":  # intended to check if json_data is None (actually only catches an empty string)
            status = my_dict['elements'][0]['status']  # if not empty, assigns the 'status' key to the "status" variable
        else:
            lag = lag + 0.5  # increments lag
        trials = trials + 1  # increments the loop counter
        sleep(5)  # wait to avoid being banned from the server
        print(trials)
From the error you originally encountered, it's not obvious where the issue is: basically any part of that statement can raise a TypeError when the part being evaluated is None. For example, my_dict['elements'][0]['status'] can fail if my_dict is None, or if my_dict['elements'] is None.
I would try inserting breakpoints to help debug the cause. Another approach that might help is to wrap each part of the statement in a try-except block, as below:
my_dict = None
try:
    elements = my_dict['elements']
except TypeError as e:
    print('It is possible that my_dict is None.')
    print('Error:', e)
else:
    try:
        ele = elements[0]
    except TypeError as e:
        print('It is possible that elements is None.')
        print('Error:', e)
    else:
        try:
            status = ele['status']
        except TypeError as e:
            print('It is possible that the first element is None.')
            print('Error:', e)
        else:
            print('Got the status successfully:', status)
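A more compact alternative (just a sketch, assuming my_dict comes from json.loads as in the question): use dict.get with defaults so that a missing key or a None value never raises a TypeError.

# Compact sketch: treat None and missing keys uniformly.
elements = (my_dict or {}).get('elements') or []
first = elements[0] if elements else None
status = (first or {}).get('status')
if status is None:
    print('status not available yet')
else:
    print('Got the status successfully:', status)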

Updating one MySQL table with multiple processes via pymysql

I am trying to update one table with multiple processes via pymysql, where each process reads a CSV file split from a huge one in order to speed things up. But I get a "Lock wait timeout exceeded; try restarting transaction" exception when I run the script. After searching the posts on this site, I found one that mentioned MySQL's built-in LOAD DATA INFILE, but with no details on it. How can I do this with pymysql to reach my goal?
---------------------------first edit----------------------------------------
Here's the job method:
def importprogram(path, name):
    begin = time.time()
    print('begin to import program' + name + ' info.')
    # "c:\\sometest.csv"
    file = open(path, mode='rb')
    csvfile = csv.reader(codecs.iterdecode(file, 'utf-8'))
    connection = None
    try:
        connection = pymysql.connect(host='a host', user='someuser', password='somepsd', db='mydb',
                                     cursorclass=pymysql.cursors.DictCursor)
        count = 1
        with connection.cursor() as cursor:
            sql = '''update sometable set Acolumn='{guid}' where someid='{pid}';'''
            next(csvfile, None)
            for line in csvfile:
                try:
                    count = count + 1
                    if ''.join(line).strip():
                        command = sql.format(guid=line[2], pid=line[1])
                        cursor.execute(command)
                    if count % 1000 == 0:
                        print('program' + name + ' cursor execute', count)
                except csv.Error:
                    print('program csv.Error:', count)
                    continue
                except IndexError:
                    print('program IndexError:', count)
                    continue
                except StopIteration:
                    break
    except Exception as e:
        print('program' + name, str(e))
    finally:
        connection.commit()
        connection.close()
        file.close()
        print('program' + name + ' info done. time cost:', time.time() - begin)
And the multi-processing method:
import multiprocessing as mp

def multiproccess():
    pool = mp.Pool(3)
    results = []
    paths = ['C:\\testfile01.csv', 'C:\\testfile02.csv', 'C:\\testfile03.csv']
    name = 1
    for path in paths:
        results.append(pool.apply_async(importprogram, args=(path, str(name))))
        name = name + 1
    print(result.get() for result in results)
    pool.close()
    pool.join()
And the main method:
if __name__ == '__main__':
    multiproccess()
I am new to Python. What is wrong with my code or with my approach? Should I just use a single process to do the data reading and importing?
Your issue is that you are exceeding the time allowed for a response to be fetched from the server, so the client automatically times out.
In my experience, adjust the wait timeout to something like 6000 seconds, combine the files into one CSV, and just leave the data to import. Also, I would recommend running the query directly in MySQL rather than from Python.
The way I usually import CSV data from Python to MySQL is through the INSERT ... VALUES ... method, and I only do so when some kind of manipulation of the data is required (i.e. inserting different rows into different tables).
I like your approach and understand your thinking, but in reality there is no need. The benefit of the INSERT ... VALUES ... method is that you won't run into any timeout issues.
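For reference, here is a minimal sketch of a batched, parameterized variant of the question's update loop (the table and column names sometable/Acolumn/someid are taken from the question; the 1000-row batch size is an assumption). Committing per batch keeps each transaction short, which reduces the time locks are held:

import csv
import pymysql

def import_batched(path):
    # assumption: same connection parameters as in the question
    connection = pymysql.connect(host='a host', user='someuser',
                                 password='somepsd', db='mydb')
    sql = "UPDATE sometable SET Acolumn=%s WHERE someid=%s"  # parameterized, no string formatting
    try:
        with connection.cursor() as cursor, open(path, newline='') as f:
            reader = csv.reader(f)
            next(reader, None)  # skip the header row, as in the question
            batch = []
            for line in reader:
                if ''.join(line).strip():
                    batch.append((line[2], line[1]))
                if len(batch) >= 1000:
                    cursor.executemany(sql, batch)
                    connection.commit()  # commit per batch to keep locks short
                    batch = []
            if batch:
                cursor.executemany(sql, batch)
                connection.commit()
    finally:
        connection.close()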

Failing to decrypt blob passwords once in a while using Amazon KMS

import os, sys

AWS_DIRECTORY = '/home/jenkins/.aws'
certificates_folder = 'my_folder'
SUCCESS = 'success'

class AmazonKMS(object):
    def __init__(self):
        # making sure boto3 has the certificates and region files
        result = os.system('mkdir -p ' + AWS_DIRECTORY)
        self._check_os_result(result)
        result = os.system('cp ' + certificates_folder + 'kms_config ' + AWS_DIRECTORY + '/config')
        self._check_os_result(result)
        result = os.system('cp ' + certificates_folder + 'kms_credentials ' + AWS_DIRECTORY + '/credentials')
        self._check_os_result(result)
        # boto3 is the amazon client package
        import boto3
        self.kms_client = boto3.client('kms', region_name='us-east-1')
        self.global_key_alias = 'alias/global'
        self.global_key_id = None

    def _check_os_result(self, result):
        if result != 0 and raise_on_copy_error:
            raise FAILED_COPY

    def decrypt_text(self, encrypted_text):
        response = self.kms_client.decrypt(
            CiphertextBlob=encrypted_text
        )
        return response['Plaintext']
When using it:
amazon_kms = AmazonKMS()
amazon_kms.decrypt_text(blob_password)
I'm getting:
E ClientError: An error occurred (AccessDeniedException) when calling the Decrypt operation: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
The stack trace is:
../keys_management/amazon_kms.py:77: in decrypt_text
CiphertextBlob = encrypted_text
/home/jenkins/.virtualenvs/global_tests/local/lib/python2.7/site-packages/botocore/client.py:253: in _api_call
return self._make_api_call(operation_name, kwargs)
/home/jenkins/.virtualenvs/global_tests/local/lib/python2.7/site-packages/botocore/client.py:557: in _make_api_call
raise error_class(parsed_response, operation_name)
This happens in a script that runs once an hour. It fails only 2-3 times a day, and after a retry it succeeds.
I tried upgrading from boto3 1.2.3 to 1.4.4.
What is the possible cause of this behavior?
My guess is that the issue is not in anything you described here.
Most likely the login tokens time out or something along those lines. To investigate this further, a closer look at the way the login works here would probably help.
How does this code run? Is it running inside AWS, like on Lambda or EC2? Do you run it from your own server (it looks like it runs on Jenkins)? How is login access established? What are those kms_credentials used for, and what do they look like? Do you do something like assuming a role (which would probably work through access tokens that stop working after some time)?
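Since a retry reportedly succeeds, one pragmatic mitigation while you investigate (a sketch only, assuming the AccessDeniedException is transient, e.g. caused by expiring credentials) is a small retry wrapper around the existing class:

import time
from botocore.exceptions import ClientError

def decrypt_with_retry(kms, encrypted_text, attempts=3, delay=5):
    # kms is an AmazonKMS instance from the question; attempts/delay are assumptions
    for attempt in range(1, attempts + 1):
        try:
            return kms.decrypt_text(encrypted_text)
        except ClientError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)  # back off before retrying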

World of Tanks Python list comparison from JSON

OK, I am trying to create a function that reads a list of IDs from an external JSON file, which it is doing. It is even putting the data into the database on program load. My issue is this: I can't seem to match the list IDs in a comparison. Here is my current code:
def check(account):
    global ID_account
    import json, httplib
    if not hasattr(BigWorld, 'iddata'):
        UID_DB = account['databaseID']
        UID = ID_account
        try:
            conn = httplib.HTTPConnection('URL')
            conn.request('GET', '/ids.json')
            conn.sock.settimeout(2)
            resp = conn.getresponse()
            qresp = resp.read()
            BigWorld.iddata = json.loads(qresp)
            LOG_NOTE('[ABRO] Request of URL data successful.')
            conn.close()
        except:
            LOG_NOTE('[ABRO] Http request to URL problem. Loading local data.')
        if UID_DB is not None:
            list = BigWorld.iddata["ids"]
            #print (len(list) - 1)
            for n in range(0, (len(list) - 1)):
                #print UID_DB
                #print list[n]
                if UID_DB == list[n]:
                    #print '[ABRO] userid located:'
                    #print UID_DB
                    UID = UID_DB
        else:
            LOG_NOTE('[ABRO] userid not set.')
        if 'databaseID' in account and account['databaseID'] != UID:
            print '[ABRO] Account not active in database, game closing...... '
            BigWorld.quit()
Now my JSON file looks like this:
{
    "ids": [
        "1001583757",
        "500687699",
        "000000000"
    ]
}
Now, when I run this with all the commented-out prints enabled, it seems to execute perfectly fine until it tries to do the match inside the for loop. Even when the prints show UID_DB and list[n] having the same values, it does not set my variable; it doesn't post any errors; it simply acts as if there were no match. Am I possibly missing a loop break? Here is the Python log, starting with the print of the list length:
INFO: 2
INFO: 1001583757
INFO: 1001583757
INFO: 1001583757
INFO: 500687699
INFO: [ABRO] Account not active, game closing......
As you can see from the log, it never prints the "userid located" message, so it is not matching them. It just continues with the loop and uses the default ID I defined above the function. Any ideas would definitely help, as I've been poking and prodding at this thing for 3 days now.
The answer to this was found by @VikasNehaOjha: it was simply missing a type conversion before the comparison. I fixed it by adding
list[n] = int(list[n])
That resolved my issue, and the comparisons finally matched.
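For completeness, a minimal sketch of the corrected loop (note that the original range(0, len(list) - 1) also skips the last element; range(len(ids)) visits every entry):

ids = BigWorld.iddata["ids"]
for n in range(len(ids)):          # visits every id, including the last one
    if UID_DB == int(ids[n]):      # int() makes the types match, per the fix above
        UID = UID_DB
        break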

Is there a tool to check database integrity in Django?

The MySQL database powering our Django site has developed some integrity problems; e.g. foreign keys that refer to nonexistent rows. I won't go into how we got into this mess, but I'm now looking at how to fix it.
Basically, I'm looking for a script that scans all models in the Django site, and checks whether all foreign keys and other constraints are correct. Hopefully, the number of problems will be small enough so they can be fixed by hand.
I could code this up myself but I'm hoping that somebody here has a better idea.
I found django-check-constraints but it doesn't quite fit the bill: right now, I don't need something to prevent these problems, but to find them so they can be fixed manually before taking other steps.
Other constraints:
Django 1.1.1; upgrading has been determined to break things
MySQL 5.0.51 (Debian Lenny), currently with MyISAM tables
Python 2.5; might be upgradable, but I'd rather not right now
(Later, we will convert to InnoDB for proper transaction support, and maybe foreign key constraints on the database level, to prevent similar problems in the future. But that's not the topic of this question.)
I whipped up something myself. The management script below should be saved in myapp/management/commands/checkdb.py. Make sure that intermediate directories have an __init__.py file.
Usage: ./manage.py checkdb for a full check; use --exclude app.Model or -e app.Model to exclude the model Model in the app app.
from django.core.management.base import BaseCommand, CommandError
from django.core.management.base import NoArgsCommand
from django.core.exceptions import ObjectDoesNotExist
from django.db import models
from optparse import make_option
from lib.progress import with_progress_meter

def model_name(model):
    return '%s.%s' % (model._meta.app_label, model._meta.object_name)

class Command(BaseCommand):
    args = '[-e|--exclude app_name.ModelName]'
    help = 'Checks constraints in the database and reports violations on stdout'

    option_list = NoArgsCommand.option_list + (
        make_option('-e', '--exclude', action='append', type='string', dest='exclude'),
    )

    def handle(self, *args, **options):
        # TODO once we're on Django 1.2, write to self.stdout and self.stderr instead of plain print
        exclude = options.get('exclude', None) or []

        failed_instance_count = 0
        failed_model_count = 0
        for app in models.get_apps():
            for model in models.get_models(app):
                if model_name(model) in exclude:
                    print 'Skipping model %s' % model_name(model)
                    continue
                fail_count = self.check_model(app, model)
                if fail_count > 0:
                    failed_model_count += 1
                    failed_instance_count += fail_count
        print 'Detected %d errors in %d models' % (failed_instance_count, failed_model_count)

    def check_model(self, app, model):
        meta = model._meta
        if meta.proxy:
            print 'WARNING: proxy models not currently supported; ignored'
            return

        # Define all the checks we can do; they return True if they are ok,
        # False if not (and print a message to stdout)
        def check_foreign_key(model, field):
            foreign_model = field.related.parent_model
            def check_instance(instance):
                try:
                    # name: name of the attribute containing the model instance (e.g. 'user')
                    # attname: name of the attribute containing the id (e.g. 'user_id')
                    getattr(instance, field.name)
                    return True
                except ObjectDoesNotExist:
                    print '%s with pk %s refers via field %s to nonexistent %s with pk %s' % \
                        (model_name(model), str(instance.pk), field.name, model_name(foreign_model), getattr(instance, field.attname))
            return check_instance

        # Make a list of checks to run on each model instance
        checks = []
        for field in meta.local_fields + meta.local_many_to_many + meta.virtual_fields:
            if isinstance(field, models.ForeignKey):
                checks.append(check_foreign_key(model, field))

        # Run all checks
        fail_count = 0
        if checks:
            for instance in with_progress_meter(model.objects.all(), model.objects.count(), 'Checking model %s ...' % model_name(model)):
                for check in checks:
                    if not check(instance):
                        fail_count += 1
        return fail_count
I'm making this a community wiki because I welcome any and all improvements to my code!
Thomas' answer is great but is now a bit out of date.
I have updated it as a gist to support Django 1.8+.
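For anyone on a current Django, here is a hedged sketch of the same foreign-key scan using the app registry (models.get_apps() and the optparse-based option_list were removed in later Django versions; apps.get_models() and _meta.get_fields() are their modern counterparts):

from django.apps import apps
from django.core.exceptions import ObjectDoesNotExist
from django.db import models

def check_foreign_keys():
    # Returns the number of dangling foreign keys found; assumes every model
    # has a default `objects` manager.
    errors = 0
    for model in apps.get_models():
        fk_fields = [f for f in model._meta.get_fields()
                     if isinstance(f, models.ForeignKey)]
        if not fk_fields:
            continue
        for instance in model.objects.all().iterator():
            for field in fk_fields:
                try:
                    getattr(instance, field.name)  # raises if the target row is missing
                except ObjectDoesNotExist:
                    errors += 1
                    print('%s pk=%s has a dangling FK in field %s' %
                          (model.__name__, instance.pk, field.name))
    return errors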