ReadProcessMemory() doesn't read pages with specific AllocationProtect values - ctypes

I'm building a memory scanner, and with some error handling I noticed that ReadProcessMemory() reads about 90% of the process's pages, but on the ones whose mbi.Protect value is 1 or 260 it fails with ERROR 299 (Partial Copy) and BytesRead comes back as 0.
I run it as admin, set the debug privilege and open the process with VM_READ, but exactly these pages with mbi.Protect == 260 and 1 are unreadable. So, is it normal that it can't read all pages, or am I doing something wrong? Here is the code (to reproduce it you also need this module, which I import into the main code and which sets up all the ctypes background: https://pastebin.com/hMxLej5k; then open Python, import the code below and call main(pid), where pid is the PID of the process you want to read):
from ctypes import *
from ctypes import wintypes
import win32security
from setup_apis import *

def setDebugPriv():
    token_handle = wintypes.HANDLE()
    if not OpenProcessToken(
        GetCurrentProcess(),
        TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
        byref(token_handle),
    ):
        print("Error:", kernel32.GetLastError())
        return False
    luidvalue = win32security.LookupPrivilegeValue(None, win32security.SE_DEBUG_NAME)
    if not luidvalue:
        print("Error:", kernel32.GetLastError())
        return False
    se_debug_name_value = LUID(luidvalue)  # local value of the debug privilege
    LAA = LUID_AND_ATTRIBUTES(
        se_debug_name_value,
        SE_PRIVILEGE_ENABLED,
    )
    tkp = TOKEN_PRIVILEGES(
        1,    # DWORD PrivilegeCount
        LAA,  # LUID_AND_ATTRIBUTES
    )
    if not AdjustTokenPrivileges(
        token_handle,
        False,
        byref(tkp),
        sizeof(tkp),
        None,
        None,
    ):
        print("Error:", kernel32.GetLastError())
        CloseHandle(token_handle)
        return False
    return True
#################################
def main(pid=None):
    setDebugPriv()
    process = OpenProcess(
        PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
        False,
        pid,
    )
    system_info = SYSTEM_INFO()
    GetSystemInfo(byref(system_info))
    MaxAppAdress = system_info.lpMaximumApplicationAdress
    VirtualQueryEx = VirtualQueryEx64
    mbi = MEMORY_BASIC_INFORMATION64()
    memset(
        byref(mbi),
        0,
        sizeof(mbi),
    )
    Adress = 0
    BytesRead = c_size_t(0)
    while MaxAppAdress > Adress:
        VirtualQueryEx(
            process,
            Adress,
            byref(mbi),
            sizeof(mbi),
        )
        if mbi.State == MEM_COMMIT:
            try:
                ContentsBuffer = create_string_buffer(mbi.RegionSize)
            except:
                pass
            if not ReadProcessMemory(
                process,
                Adress,
                ContentsBuffer,
                mbi.RegionSize,
                byref(BytesRead),
            ):
                print("Cant Read, Error: %i, Protect State: %i" % (kernel32.GetLastError(), mbi.Protect))
                print("BytesRead:", BytesRead)
                Adress += mbi.RegionSize
                continue
        Adress += mbi.RegionSize

See Memory Protection Constants (260 = 0x104, i.e. PAGE_READWRITE | PAGE_GUARD). No-access and guard-page regions cause exceptions: you can't read a PAGE_NOACCESS region at all, and you don't want to fire PAGE_GUARD exceptions, as they are meant to warn a process that a stack needs to grow and commit more pages. Don't attempt to read them.
PAGE_NOACCESS (0x01): Disables all access to the committed region of pages. An attempt to read from, write to, or execute the committed region results in an access violation. This flag is not supported by the CreateFileMapping function.
PAGE_READWRITE (0x04): Enables read-only or read/write access to the committed region of pages. If Data Execution Prevention is enabled, attempting to execute code in the committed region results in an access violation.
PAGE_GUARD (0x100): Pages in the region become guard pages. Any attempt to access a guard page causes the system to raise a STATUS_GUARD_PAGE_VIOLATION exception and turn off the guard page status. Guard pages thus act as a one-time access alarm. For more information, see Creating Guard Pages. When an access attempt leads the system to turn off guard page status, the underlying page protection takes over. If a guard page exception occurs during a system service, the service typically returns a failure status indicator. This value cannot be used with PAGE_NOACCESS. This flag is not supported by the CreateFileMapping function.
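As a minimal sketch of how to act on this in the question's loop (PAGE_NOACCESS and PAGE_GUARD are the standard values from the table above; MEM_COMMIT is assumed to come from the question's setup_apis module), you can filter regions before calling ReadProcessMemory:

PAGE_NOACCESS = 0x01   # value from the table above
PAGE_GUARD = 0x100     # value from the table above

def is_readable_region(mbi):
    # Only committed regions that are neither no-access nor guard pages
    # can be read safely with ReadProcessMemory.
    if mbi.State != MEM_COMMIT:
        return False
    if mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD):
        return False
    return True

Inside the while loop, build the buffer and call ReadProcessMemory only when is_readable_region(mbi) is true; otherwise just advance Adress by mbi.RegionSize.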


Why am I not able to detect a None value from a dictionary

I have seen this issue happen to many people (here). I am still struggling to validate whether what my dictionary captures from a JSON response is None or not, but I keep getting the following error.
This code is supposed to make a curl-style GET request looking for the 'closed' value in the 'status' key until it finds it (or up to 10 times). When payment is made via the QR code, the status changes from opened to closed.
status = (my_dict['elements'][0]['status'])
TypeError: 'NoneType' object is not subscriptable
Any clue about what I am doing wrong and how I can fix it?
Also, if I run the part of the script that fetches the JSON standalone, it executes smoothly every time. Is there anything in the code that could be affecting the request?
By the way, I started programming a week ago, so please excuse me if I mix up concepts or say something that lacks common sense.
I have tried to validate the if with "is not" instead of "!=" and also with None instead of "".
def show_qr():
    reference_json = reference.replace(' ', '%20')  #replaces "space" with %20 for CURL assembly
    url = "https://api.mercadopago.com/merchant_orders?external_reference=" + reference_json  #CURL URL concatenate
    headers = CaseInsensitiveDict()
    headers["Authorization"] = "Bearer MY_TOKEN"
    pygame.init()
    ventana = pygame.display.set_mode(window_resolution, pygame.FULLSCREEN)  #screen settings
    producto = pygame.image.load("qrcode001.png")  #Qr image load
    producto = pygame.transform.scale(producto, [640,480])  #Qr size
    trials = 0  #sets while loop variable start value
    status = "undefined"  #defines "status" variable
    while status != "closed" and trials < 10:  #to repeat the loop until "status" value = "closed"
        ventana.blit(producto, (448,192))  #QR code position setting
        pygame.display.update()
        response = requests.request("GET", url, headers=headers)  #makes CURL GET
        lag = 0.5  #creates an incremental 0.5 seconds every time return value is None
        sleep(lag)
        json_data = response.text  #Captures JSON response as text
        my_dict = json.loads(json_data)  #creates a dictionary with JSON data
        if json_data != "":  #Checks if json_data is None
            status = my_dict['elements'][0]['status']  #If json_data is not None, assigns 'status' key to "status" variable
        else:
            lag = lag + 0.5  #increments lag
        trials = trials + 1  #increments loop variable
        sleep(5)  #time to avoid being banned from server
        print(trials)
From the error you encountered it's not clear exactly where the issue is: basically any part of that statement can raise a TypeError when the part being subscripted is None. For example, my_dict['elements'][0]['status'] can fail if my_dict is None, and also if my_dict['elements'] is None.
I would try inserting breakpoints to narrow down the cause. Another approach that might help is to wrap each part of the statement in a try/except block as below:
my_dict = None
try:
    elements = my_dict['elements']
except TypeError as e:
    print("It's possible that my_dict may be None.")
    print('Error:', e)
else:
    try:
        ele = elements[0]
    except TypeError as e:
        print("It's possible that elements may be None.")
        print('Error:', e)
    else:
        try:
            status = ele['status']
        except TypeError as e:
            print("It's possible that the first element may be None.")
            print('Error:', e)
        else:
            print('Got the status successfully:', status)
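A more compact alternative to the nested try/except (just a sketch, not part of the original answer) is to walk the structure with .get() and fallbacks, so that None is never subscripted; the sample JSON string below is hypothetical:

import json

# Hypothetical stand-in for response.text from the question.
json_data = '{"elements": [{"status": "opened"}]}'

my_dict = json.loads(json_data) if json_data else None

# Each step falls back to an empty container, so no level is ever
# subscripted while it is None.
elements = (my_dict or {}).get('elements') or []
first = elements[0] if elements else {}
status = (first or {}).get('status')

print(status)  # "opened", or None if any level was missing or null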

Connection issues in Storage trigger GCF

For my application, each new file uploaded to Storage is read and its data is appended to a main file. The new file contains 2 lines: a header, and an array whose values are separated by commas. The main file will grow to a maximum of about 265 MB; the new files will be at most 30 MB each.
def write_append_to_ecg_file(filename, ecg, patientdata):
    file1 = open('/tmp/' + filename, "w+")
    file1.write(":".join(patientdata))
    file1.write('\n')
    file1.write(",".join(ecg.astype(str)))
    file1.close()

def storage_trigger_function(data, context):
    # Download the segment file
    download_files_storage(bucket_name, new_file_name, storage_folder_name=blob_path)
    # Read the segment file
    data_from_new_file, meta = read_new_file(new_file_name, scale=1, fs=125, include_meta=True)
    print("Length of ECG data from segment {} file {}".format(segment_no, len(data_from_new_file)))
    os.remove(new_file_name)
    # Check if the main ECG file exists
    file_exists = blob_exists(bucket_name, blob_with_the_main_file)
    print("File status {}".format(file_exists))
    data_from_main_file = []
    if file_exists:
        download_files_storage(bucket_name, main_file_name, storage_folder_name=blob_with_the_main_file)
        data_from_main_file, meta = read_new_file(main_file_name, scale=1, fs=125, include_meta=True)
        print("ECG data from main file {}".format(len(data_from_main_file)))
        os.remove(main_file_name)
        data_from_main_file = np.append(data_from_main_file, data_from_new_file)
        print("data after appending {}".format(len(data_from_main_file)))
        write_append_to_ecg_file(main_file, data_from_main_file, meta)
        token = upload_files_storage(bucket_name, main_file, storage_folder_name=main_file_blob, upload_file=True)
    else:
        write_append_to_ecg_file(main_file, data_from_new_file, meta)
        token = upload_files_storage(bucket_name, main_file, storage_folder_name=main_file_blob, upload_file=True)
The GCF is deployed with:
gcloud functions deploy storage_trigger_function --runtime python37 --trigger-resource patch-us.appspot.com --trigger-event google.storage.object.finalize --timeout 540s --memory 8192MB
For the first file, I was able to read the file and write the data to the main file. But after uploading the 2nd file, it gives: Function execution took 70448 ms, finished with status: 'connection error'. On uploading the 3rd file, it gives: Function invocation was interrupted. Error: memory limit exceeded. Despite deploying the function with 8192 MB of memory, I am getting this error. Can I get some help on this?
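No answer is given here, but one way to keep memory usage flat regardless of how large the main file grows is to let Cloud Storage concatenate the objects server-side instead of downloading both and joining them with np.append. A minimal sketch using the google-cloud-storage client (the object names are placeholders; it assumes both objects are in the same bucket, the main object already exists, and appending the new file's raw lines, header included, is acceptable):

from google.cloud import storage

def append_via_compose(bucket_name, main_blob_name, new_blob_name):
    # Server-side concatenation: the function never downloads either object.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    main_blob = bucket.blob(main_blob_name)
    new_blob = bucket.blob(new_blob_name)
    # compose() rewrites main_blob as main_blob + new_blob concatenated.
    main_blob.compose([main_blob, new_blob])

If the new file's header line must be stripped before appending, the data still has to be rewritten, but streaming the download and upload rather than accumulating whole arrays in memory would avoid the memory-limit error.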

Updating one table of MYSQL with multiple processes via pymysql

I am trying to update one table with multiple processes via pymysql, and each process reads a CSV file split from a huge one in order to speed things up. But I get the Lock wait timeout exceeded; try restarting transaction exception when I run the script. After searching the posts on this site, I found one that mentioned using the built-in LOAD DATA INFILE, but it gave no details. How can I do this with pymysql to reach my goal?
---------------------------first edit----------------------------------------
Here's the job method:
def importprogram(path, name):
    begin = time.time()
    print('begin to import program' + name + ' info.')
    # "c:\\sometest.csv"
    file = open(path, mode='rb')
    csvfile = csv.reader(codecs.iterdecode(file, 'utf-8'))
    connection = None
    try:
        connection = pymysql.connect(host='a host', user='someuser', password='somepsd', db='mydb',
                                     cursorclass=pymysql.cursors.DictCursor)
        count = 1
        with connection.cursor() as cursor:
            sql = '''update sometable set Acolumn='{guid}' where someid='{pid}';'''
            next(csvfile, None)
            for line in csvfile:
                try:
                    count = count + 1
                    if ''.join(line).strip():
                        command = sql.format(guid=line[2], pid=line[1])
                        cursor.execute(command)
                    if count % 1000 == 0:
                        print('program' + name + ' cursor execute', count)
                except csv.Error:
                    print('program csv.Error:', count)
                    continue
                except IndexError:
                    print('program IndexError:', count)
                    continue
                except StopIteration:
                    break
    except Exception as e:
        print('program' + name, str(e))
    finally:
        connection.commit()
        connection.close()
        file.close()
        print('program' + name + ' info done.time cost:', time.time() - begin)
And the multi-processing method:
import multiprocessing as mp

def multiproccess():
    pool = mp.Pool(3)
    results = []
    paths = ['C:\\testfile01.csv', 'C:\\testfile02.csv', 'C:\\testfile03.csv']
    name = 1
    for path in paths:
        results.append(pool.apply_async(importprogram, args=(path, str(name))))
        name = name + 1
    print(result.get() for result in results)
    pool.close()
    pool.join()
And the main method:
if __name__ == '__main__':
    multiproccess()
I am new to Python. What about the code, or the approach itself, is going wrong? Should I use only a single process to do the data reading and importing?
Your issue is that you are exceeding the time allowed for a response to be fetched from the server, so the client is automatically timing out.
In my experience, you can adjust the wait timeout to something like 6000 seconds, combine everything into one CSV and just leave the data to import. Also, I would recommend running the query directly from MySQL rather than from Python.
The way I usually import CSV data from Python into MySQL is with the INSERT ... VALUES ... method, and I only do so when some kind of manipulation of the data is required (i.e. inserting different rows into different tables).
I like your approach and understand your thinking, but in reality there is no need. The benefit of the INSERT ... VALUES ... method is that you won't run into any timeout issues.
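To make the INSERT ... VALUES ... route concrete, here is a minimal sketch using pymysql's executemany with small batches and short transactions (the table and column names mirror the placeholders in the question, and it assumes someid is a primary or unique key so the insert behaves as an upsert):

import csv
import pymysql

def import_csv(path, batch_size=1000):
    connection = pymysql.connect(host='a host', user='someuser',
                                 password='somepsd', db='mydb', autocommit=False)
    try:
        with open(path, newline='', encoding='utf-8') as f, connection.cursor() as cursor:
            reader = csv.reader(f)
            next(reader, None)  # skip the header row
            sql = ("INSERT INTO sometable (someid, Acolumn) VALUES (%s, %s) "
                   "ON DUPLICATE KEY UPDATE Acolumn = VALUES(Acolumn)")
            batch = []
            for line in reader:
                if ''.join(line).strip():
                    batch.append((line[1], line[2]))
                if len(batch) >= batch_size:
                    cursor.executemany(sql, batch)  # parameterized, no manual quoting
                    connection.commit()             # short transactions reduce lock waits
                    batch = []
            if batch:
                cursor.executemany(sql, batch)
                connection.commit()
    finally:
        connection.close()

Committing every batch keeps each transaction short, which is what helps avoid the Lock wait timeout exceeded error when several processes touch the same table.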

Failing to decrypt blob passwords only once in a while using Amazon KMS

import os, sys

AWS_DIRECTORY = '/home/jenkins/.aws'
certificates_folder = 'my_folder'
SUCCESS = 'success'

class AmazonKMS(object):
    def __init__(self):
        # making sure boto3 has the certificates and region files
        result = os.system('mkdir -p ' + AWS_DIRECTORY)
        self._check_os_result(result)
        result = os.system('cp ' + certificates_folder + 'kms_config ' + AWS_DIRECTORY + '/config')
        self._check_os_result(result)
        result = os.system('cp ' + certificates_folder + 'kms_credentials ' + AWS_DIRECTORY + '/credentials')
        self._check_os_result(result)
        # boto3 is the amazon client package
        import boto3
        self.kms_client = boto3.client('kms', region_name='us-east-1')
        self.global_key_alias = 'alias/global'
        self.global_key_id = None

    def _check_os_result(self, result):
        if result != 0 and raise_on_copy_error:
            raise FAILED_COPY

    def decrypt_text(self, encrypted_text):
        response = self.kms_client.decrypt(
            CiphertextBlob=encrypted_text
        )
        return response['Plaintext']
When using it:
amazon_kms = AmazonKMS()
amazon_kms.decrypt_text(blob_password)
I'm getting:
E ClientError: An error occurred (AccessDeniedException) when calling the Decrypt operation: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
The stack trace is:
../keys_management/amazon_kms.py:77: in decrypt_text
CiphertextBlob = encrypted_text
/home/jenkins/.virtualenvs/global_tests/local/lib/python2.7/site-packages/botocore/client.py:253: in _api_call
return self._make_api_call(operation_name, kwargs)
/home/jenkins/.virtualenvs/global_tests/local/lib/python2.7/site-packages/botocore/client.py:557: in _make_api_call
raise error_class(parsed_response, operation_name)
This happens in a script that runs once an hour. It only fails 2-3 times a day, and after a retry it succeeds. I tried upgrading from boto3 1.2.3 to 1.4.4.
What is the possible cause of this behavior?
My guess is that the issue is not in anything you described here.
Most likely the login tokens time out, or something along those lines. To investigate this further, a closer look at how the login works here would probably help.
How does this code run? Is it running inside AWS, for example on Lambda or EC2? Do you run it from your own server (it looks like it runs on Jenkins)? How is login access established? What are those kms_credentials used for and what do they look like? Do you do something like assuming a role (which would probably work through access tokens that stop working after some time)?
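Since the question notes that a retry succeeds, a pragmatic workaround (a sketch, not part of the original answer, and it does not explain the root cause) is to retry the decrypt call explicitly, since botocore generally treats AccessDeniedException as non-retryable and won't do it for you:

import time
import botocore.exceptions

def decrypt_with_retry(kms_client, encrypted_text, attempts=3, delay=2):
    # Retry the whole call a few times before giving up; the question reports
    # that a second attempt usually succeeds.
    last_error = None
    for _ in range(attempts):
        try:
            return kms_client.decrypt(CiphertextBlob=encrypted_text)['Plaintext']
        except botocore.exceptions.ClientError as e:
            last_error = e
            time.sleep(delay)
    raise last_error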

Using the LDAvis package in R to create a gist file of the result

I'm using LDAvis for topic modeling and am trying to use the as.gist option to create a gist. When serVis executes, there is a timeout in curl::curl_fetch_memory after about 10 seconds. If I immediately execute serVis again, I get a different error, 'Problems parsing JSON', and from then on that same error recurs whenever serVis is run.
If I start all over with a fresh workspace the same behavior occurs. The first time serVis is run, curl::curl_fetch_memory times out after about 10 seconds. Subsequent executions return 'Problems parsing JSON'.
If I don't use the as.gist option it works fine, but of course doesn't create a gist.
Very rarely, it works and a gist is created. If I change parameters to reduce the size of the JSON object it usually works, which makes me think it may be related to size.
I have explored the various RCurlOptions timeout settings. Currently, they are set as
options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem",
package = "RCurl"),
connecttimeout = 300, timeout = 3000,
followlocation = TRUE, dns.cache.timeout = 300))
Below is a console listing with debug set on curl::curl_fetch_memory.
> json <- createJSON(phi = cases$phi,
+ theta = cases$theta,
+ doc.len .... [TRUNCATED]
> serVis(json, open.browser = TRUE, as.gist = TRUE, description = 'APM Community')
debugging in: curl::curl_fetch_memory(url, handle = handle)
debug: {
output <- .Call(R_curl_fetch_memory, url, handle)
res <- handle_response_data(handle)
res$content <- output
res
}
Browse[2]> output <- .Call(R_curl_fetch_memory, url, handle)
Error: Timeout was reached
Browse[2]> output <- .Call(R_curl_fetch_memory, url, handle)
Browse[2]> rawToChar(output)
[1] "{\"message\":\"Problems parsing JSON\",\"documentation_url\":\"https://developer.github.com/v3\"}"
Browse[2]>
.
.
exiting from: curl::curl_fetch_memory(url, handle = handle)
Error: Problems parsing JSON
Any hints on how to debug this problem?