Python multiprocessing: execute code in the child process when termination occurs - exception?

I want to know if there's a way to run some code in the child process when the parent process tries to terminate it. Could this be done by raising an exception, maybe?
My code looks something like this:
main_process.py
from multiprocessing import Process
from time import sleep

def main():
    p1 = Process(target=child, args=(arg1,))
    p1.daemon = True  # must be set before start()
    p1.start()
    #blah blah blah code here
    sleep(5)
    p1.terminate()
def child(arg1):
    #blah blah blah
    itemToSend = {}
    #more blah blah
    snmpEngine.transportDispatcher.jobStarted(1)  # this job would never finish
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    except:
        snmpEngine.transportDispatcher.closeDispatcher()
        raise
Since the job never finishes, the child process keeps running; I have to terminate it from the parent process because it never terminates on its own. However, I want to send itemToSend to the parent process before the child terminates. Can I return it to the parent somehow?
UPDATE: Let me explain how runDispatcher() of the pysnmp module works:
def runDispatcher():
    while jobsArePending():  # jobs are always pending because of the jobStarted() call
        loop()

def jobStarted(jobId):
    if jobId in jobs:  # this way there's always 1 job remaining
        jobs[jobId] = jobs[jobId] + 1
This is very frustrating. Instead of doing all this, is it possible to write an SNMP trap listener on our own? Can you point me to the right resources?

The .runDispatcher() method actually invokes the mainloop of an asynchronous I/O engine (asyncore/twisted), which terminates as soon as no active pysnmp 'jobs' are pending.
You can make the pysnmp dispatcher cooperate with the rest of your app by registering your own timer callback function, which will be invoked periodically from the mainloop. In your callback you could check whether a termination event has arrived and reset the pysnmp 'job', which would make the pysnmp mainloop complete.
def timerCb(timeNow):
    if terminationRequestedFlag:  # this flag is raised by an event from parent process
        # use the same jobId as in jobStarted()
        snmpEngine.transportDispatcher.jobFinished(1)

snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)
Those pysnmp jobs are just flags (like the '1' in your code) that tell the I/O core that asynchronous applications still need it to run and serve them. Once the last of the potentially many apps is no longer interested in the I/O core, the mainloop terminates.
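Putting this together with the question's child(), a minimal sketch of a cooperative shutdown might look like the following. This is only an illustration: stop_event and conn (a multiprocessing.Event and one end of a multiprocessing.Pipe, both passed in by the parent) are assumed names, not part of pysnmp:

def child(arg1, stop_event, conn):
    #blah blah blah
    itemToSend = {}
    snmpEngine.transportDispatcher.jobStarted(1)

    def timerCb(timeNow):
        if stop_event.is_set():  # the parent sets this instead of calling terminate()
            snmpEngine.transportDispatcher.jobFinished(1)  # lets runDispatcher() return

    snmpEngine.transportDispatcher.registerTimerCbFun(timerCb)
    try:
        snmpEngine.transportDispatcher.runDispatcher()
    finally:
        conn.send(itemToSend)  # hand the result to the parent before exiting
        conn.close()
        snmpEngine.transportDispatcher.closeDispatcher()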

If the child process can cooperate, then you could use a multiprocessing.Event to inform the child that it should exit, and a multiprocessing.Pipe to send itemToSend to the parent. Note that .terminate() kills the child abruptly (exit handlers and finally clauses are not executed), which is why a cooperative shutdown is needed to deliver itemToSend:
#!/usr/bin/env python
import logging
import multiprocessing as mp
from threading import Timer

def child(stopped_event, conn):
    while not stopped_event.wait(1):
        pass
    mp.get_logger().info("sending")
    conn.send({'tosend': 'from child'})
    conn.close()

def terminate(process, stopped_event, conn):
    stopped_event.set()  # nudge child process
    Timer(5, do_terminate, [process]).start()
    try:
        print(conn.recv())  # get value from the child
        mp.get_logger().info("received")
    except EOFError:
        mp.get_logger().info("eof")

def do_terminate(process):
    if process.is_alive():
        mp.get_logger().info("terminating")
        process.terminate()

if __name__ == "__main__":
    mp.log_to_stderr().setLevel(logging.DEBUG)
    parent_conn, child_conn = mp.Pipe(duplex=False)
    event = mp.Event()
    p = mp.Process(target=child, args=[event, child_conn])
    p.start()
    child_conn.close()  # child must be the only one with it opened
    Timer(3, terminate, [p, event, parent_conn]).start()
Output
[DEBUG/MainProcess] created semlock with handle 139845842845696
[DEBUG/MainProcess] created semlock with handle 139845842841600
[DEBUG/MainProcess] created semlock with handle 139845842837504
[DEBUG/MainProcess] created semlock with handle 139845842833408
[DEBUG/MainProcess] created semlock with handle 139845842829312
[INFO/Process-1] child process calling self.run()
[INFO/Process-1] sending
{'tosend': 'from child'}
[INFO/Process-1] process shutting down
[DEBUG/Process-1] running all "atexit" finalizers with priority >= 0
[DEBUG/Process-1] running the remaining "atexit" finalizers
[INFO/MainProcess] received
[INFO/Process-1] process exiting with exitcode 0
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers

Related

ReadProcessMemory() doesn't read pages with specific AllocationProtect values

I'm building a memory scanner, and with some error handling I noticed that ReadProcessMemory() reads about 90% of a process' pages, but on the ones whose mbi.Protect value is 1 or 260 it fails with ERROR 299 (ERROR_PARTIAL_COPY) and a BytesRead output of 0.
I run it as admin, set debug privileges, and open the process with VM_READ, but exactly these pages with mbi.Protect == 260 or 1 are unreadable. So, is it normal that it can't read all pages, or am I doing something wrong? Here is the code. (To reproduce it you also need this piece of code that I import into the main module, which sets up all the ctypes background: https://pastebin.com/hMxLej5k. Then open python, import the code below and call main(pid), where pid is the PID of the process you want to read.)
from ctypes import *
from ctypes import wintypes
import win32security
from setup_apis import *

def setDebugPriv():
    token_handle = wintypes.HANDLE()
    if not OpenProcessToken(
        GetCurrentProcess(),
        TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
        byref(token_handle),
    ):
        print("Error:", kernel32.GetLastError())
        return False
    luidvalue = win32security.LookupPrivilegeValue(None, win32security.SE_DEBUG_NAME)
    if not luidvalue:
        print("Error:", kernel32.GetLastError())
        return False
    se_debug_name_value = LUID(luidvalue)  # local value of the debug privilege
    LAA = LUID_AND_ATTRIBUTES(
        se_debug_name_value,
        SE_PRIVILEGE_ENABLED,
    )
    tkp = TOKEN_PRIVILEGES(
        1,    # DWORD PrivilegeCount
        LAA,  # LUID_AND_ATTRIBUTES
    )
    if not AdjustTokenPrivileges(
        token_handle,
        False,
        byref(tkp),
        sizeof(tkp),
        None,
        None,
    ):
        print("Error:", kernel32.GetLastError())
        CloseHandle(token_handle)
        return False
    return True
#################################
def main(pid=None):
    setDebugPriv()
    process = OpenProcess(
        PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
        False,
        pid,
    )
    system_info = SYSTEM_INFO()
    GetSystemInfo(byref(system_info))
    MaxAppAdress = system_info.lpMaximumApplicationAdress
    VirtualQueryEx = VirtualQueryEx64
    mbi = MEMORY_BASIC_INFORMATION64()
    memset(byref(mbi), 0, sizeof(mbi))
    Adress = 0
    BytesRead = c_size_t(0)
    while MaxAppAdress > Adress:
        VirtualQueryEx(
            process,
            Adress,
            byref(mbi),
            sizeof(mbi),
        )
        if mbi.State == MEM_COMMIT:
            try:
                ContentsBuffer = create_string_buffer(mbi.RegionSize)
            except:
                pass
            if not ReadProcessMemory(
                process,
                Adress,
                ContentsBuffer,
                mbi.RegionSize,
                byref(BytesRead),
            ):
                print("Cant Read, Error: %i, Protect State: %i" % (kernel32.GetLastError(), mbi.Protect))
                print("BytesRead:", BytesRead)
                Adress += mbi.RegionSize
                continue
        Adress += mbi.RegionSize
See Memory Protection Constants (260 = 0x104, i.e. PAGE_READWRITE | PAGE_GUARD). No-access and page-guard regions cause exceptions. You can't access a PAGE_NOACCESS page, and you don't want to fire PAGE_GUARD exceptions, as they are meant to warn a process that a stack needs to grow and commit more pages. Don't attempt to read them; see the sketch after the table below.
PAGE_NOACCESS (0x01): Disables all access to the committed region of pages. An attempt to read from, write to, or execute the committed region results in an access violation. This flag is not supported by the CreateFileMapping function.

PAGE_READWRITE (0x04): Enables read-only or read/write access to the committed region of pages. If Data Execution Prevention is enabled, attempting to execute code in the committed region results in an access violation.

PAGE_GUARD (0x100): Pages in the region become guard pages. Any attempt to access a guard page causes the system to raise a STATUS_GUARD_PAGE_VIOLATION exception and turn off the guard page status. Guard pages thus act as a one-time access alarm. For more information, see Creating Guard Pages. When an access attempt leads the system to turn off guard page status, the underlying page protection takes over. If a guard page exception occurs during a system service, the service typically returns a failure status indicator. This value cannot be used with PAGE_NOACCESS. This flag is not supported by the CreateFileMapping function.
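A minimal way to act on this in the scanner above is to test mbi.Protect before calling ReadProcessMemory() and skip the regions that can't be read safely. A hedged sketch, reusing the variable names from the question's loop (the constant values come straight from the table above):

PAGE_NOACCESS = 0x01
PAGE_GUARD = 0x100

# inside the scan loop, after VirtualQueryEx() has filled mbi:
readable = mbi.State == MEM_COMMIT and not (mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD))
if readable:
    ContentsBuffer = create_string_buffer(mbi.RegionSize)
    ReadProcessMemory(process, Adress, ContentsBuffer, mbi.RegionSize, byref(BytesRead))
Adress += mbi.RegionSize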

NFT: trying to run the create_collectibles script throws an 'execution reverted' error: This is from Patrick Collins' YouTube tutorial

Below is a snippet of the script (using Brownie in VS Code).
Error: "Gas estimation failed: 'execution reverted'. This transaction will likely revert. If you wish to broadcast, you must set the gas limit manually."
from brownie import AdvancedCollectible, accounts, config
from scripts.helpful_scripts import get_breed
import time

STATIC_SEED = 123

def main():
    dev = accounts.add(config["wallets"]["from_key"])
    advanced_collectible = AdvancedCollectible[len(AdvancedCollectible) - 1]
    transaction = advanced_collectible.createCollectible(
        STATIC_SEED, "None", {"from": dev, "gas_limit": 50000}
    )
    print("Waiting on second transaction...")
    # wait for the 2nd transaction
    transaction.wait(1)
    time.sleep(35)
    requestId = transaction.events["requestedCollectible"]["requestId"]
    token_id = advanced_collectible.requestIdToTokenId(requestId)
    breed = get_breed(advanced_collectible.tokenIdToBreed(token_id))
    print("Dog breed of tokenId {} is {}".format(token_id, breed))
I think this has already been answered here. To summarize: you probably have a vrf_coordinator version error. Try Rinkeby's values from the official docs.
I had the same issue, and Patrick is correct: I did not have any LINK tokens inside of my newly created contract on the Rinkeby network. So I commented out most of the lines in create_collectible.py in order to import and apply the fund_advanced_collectible() function once more on my contract:
from brownie import AdvancedCollectible, accounts, config  # needed for the names used below
from helpfulscripts import fund_advanced_collectible

def main():
    dev = accounts.add(config['wallets']['from_key'])
    advanced_collectible = AdvancedCollectible[len(AdvancedCollectible) - 1]
    # transaction = advanced_collectible.createCollectible(STATIC_SEED, "None", {"from": dev})
    # transaction.wait(1)
    # time.sleep(35)
    # requestID = transaction.events["requestedCollectible"]["requestID"]
    # tokenID = advanced_collectible.requestIDToTokenID(requestID)
    # breed = get_breed(advanced_collectible.tokenIDToBreed(tokenID))
    # print('Dog breed of {} is {}.'.format(tokenID, breed))
    fund_advanced_collectible(advanced_collectible)
As a reminder, here is the function definition of fund_advanced_collectible from helpfulscripts.py:
from brownie import accounts, config, interface, network

def fund_advanced_collectible(nft_contract):
    dev = accounts.add(config['wallets']['from_key'])
    link_token = interface.LinkTokenInterface(config['networks'][network.show_active()]['link_token'])
    link_token.transfer(nft_contract, 100000000000000000, {"from": dev})
Once the transaction was confirmed, I could verify on https://rinkeby.etherscan.io/address that my contract held 0.1 LINK, and when executing your code again the error disappeared.
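To catch the missing-LINK case up front rather than through a gas-estimation failure, you could check the contract's LINK balance before requesting a collectible. A hedged sketch reusing the config layout above; has_link is a hypothetical helper, and balanceOf is the standard ERC-20 view that LinkTokenInterface exposes:

from brownie import config, interface, network

def has_link(nft_contract, minimum=10 ** 17):  # hypothetical helper; 0.1 LINK
    link_token = interface.LinkTokenInterface(
        config['networks'][network.show_active()]['link_token'])
    return link_token.balanceOf(nft_contract) >= minimum

Calling has_link(advanced_collectible) before createCollectible() turns the opaque 'execution reverted' into an explicit, debuggable check.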

Active Record mySQL Timeout ERROR -- : worker=0 PID:(xxxxx) timeout (61s > 60s), killing

I have an async action in my controller that can perform a heavy SQL query depending on user input.
@results = ActiveRecord::Base
  .connection
  .select_all(query_string)
  .map do |record|
    Hashie::Mash.new(record)
  end
When it happens, the only response I get from the server is
E, [2020-02-05T16:14:04.133233 #59909] ERROR -- : worker=0 PID:60952 timeout (61s > 60s), killing
E, [2020-02-05T16:14:04.159372 #59909] ERROR -- : reaped #<Process::Status: pid 60952 SIGKILL (signal 9)> worker=0
Is there any way I can capture this timeout on the backend, to give the user the correct feedback?
Tried using Timeout::timeout(x) but with no success.
You could add another, shorter timeout yourself and handle the situation before the worker gets killed. Something like this might be a good start:
require 'timeout'

begin
  # stop 5 seconds before the 60s worker timeout would kick in
  Timeout.timeout(55) do
    # have only the slow query in this block
    @database_results = ActiveRecord::Base.connection.select_all(query_string)
  end
rescue Timeout::Error
  # Handle the timeout. Proper error handling depends on your application.
  # In a controller you might just want to return an error page, in a
  # background worker you might want to record the error in your database.
end

# time to translate the data should not count towards the timeout
@results = @database_results.map { |r| Hashie::Mash.new(r) }

How to specify gunicorn log max size

I'm running gunicorn as:
gunicorn --bind=0.0.0.0:5000 --log-file gunicorn.log myapp:app
Seems like gunicorn.log keeps growing. Is there a way to specify a max size of the log file, so that when it reaches the max size, it'll just overwrite it?
Thanks!!
TLDR;
I believe there might be a "Python only" solution using the rotating file handler provided in Python's standard library (at least 3.10).
To test
I created a pet project for you to fiddle with:
Create the following python file
test_logs.py
import logging
import logging.config
import time

logging.config.fileConfig(fname='log.conf', disable_existing_loggers=False)

while True:
    time.sleep(0.5)
    logging.debug('This is a debug message')
    logging.info('This is an info message')
    logging.warning('This is a warning message')
    logging.error('This is an error message')
    logging.critical('This is a critical message')
Create the following config file
log.conf
[loggers]
keys=root
[handlers]
keys=rotatingHandler
[formatters]
keys=sampleFormatter
[logger_root]
level=DEBUG
handlers=rotatingHandler
[handler_rotatingHandler]
class=logging.handlers.RotatingFileHandler
level=DEBUG
formatter=sampleFormatter
args=('./logs/logs.log', 'a', 1200, 10, 'utf-8')
[formatter_sampleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
Create the ./logs directory
Run python test_logs.py
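If everything is wired up correctly, you should see ./logs/logs.log being rolled over into numbered backups (logs.log.1, logs.log.2, ... up to logs.log.10) every time it passes roughly 1200 bytes, instead of one ever-growing file.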
To Understand
As you may have noticed already, the setting that allows for this behaviour is logging.handlers.RotatingFileHandler and the provided arguments args=('./logs/logs.log', 'a', 1200, 10, 'utf-8').
RotatingFileHandler is a stream handler writing to a file. It takes 2 parameters of interest:
maxBytes, set arbitrarily to 1200
backupCount, set arbitrarily to 10
The behaviour is that upon reaching 1200 bytes in size, the file is closed, renamed to ./logs/logs.log.<a number up to 10>, and a new file is opened.
BUT if either maxBytes or backupCount is 0, no rotation is done!
In Gunicorn
As per the documentation, you can feed gunicorn a log config file.
This could look like:
gunicorn --bind=0.0.0.0:5000 --log-config log.conf myapp:app
You will need to tweak it to your existing setup.
On Ubuntu/Linux, I suggest using logrotate to manage your logs; see this answer for details: https://stackoverflow.com/a/55643449/6705684. A sketch of such a config follows.
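For instance, a hedged logrotate sketch; the log path and the size/count limits are placeholders to adapt, and copytruncate is used so gunicorn can keep writing to the same open file descriptor:

# /etc/logrotate.d/gunicorn (hypothetical path)
# rotate once the file exceeds 10 MB, keep 5 gzipped backups;
# copytruncate truncates in place so gunicorn needs no signal
/var/log/gunicorn/gunicorn.log {
    size 10M
    rotate 5
    compress
    missingok
    copytruncate
}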
Since Python 3.3, with RotatingFileHandler, here is my solution (macOS/Windows/Linux/...):
import os
import logging
from logging.handlers import RotatingFileHandler

fmt_str = '[%(asctime)s]%(module)s - %(funcName)s - %(message)s'
fmt = logging.Formatter(fmt_str)

def rotating_logger(name, fmt=fmt,
                    level=logging.INFO,
                    logfile='.log',
                    maxBytes=10 * 1024 * 1024,
                    backupCount=5,
                    **kwargs
                    ):
    logger = logging.getLogger(name)
    hdl = RotatingFileHandler(logfile, maxBytes=maxBytes, backupCount=backupCount)
    hdl.setLevel(level)
    hdl.setFormatter(fmt)
    logger.addHandler(hdl)
    return logger
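A usage example ('myapp' and myapp.log are placeholder names). Note that the helper sets a level only on the handler, so the logger itself still needs a level if you want INFO records through:

import logging

logger = rotating_logger('myapp', logfile='myapp.log')
logger.setLevel(logging.INFO)  # the helper only sets the handler's level
logger.info('this ends up in myapp.log, rotated at 10 MB with 5 backups')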
For more, refer to:
https://docs.python.org/3/library/logging.handlers.html#rotatingfilehandler

Using libmysqlclient in a multi-threaded application

I am building a C application on the Linux platform. I need to use libmysqlclient for interfacing with the database.
I downloaded the Linux source code package mysql-connector-c-6.0.2.tar.gz and compiled it as per the instructions. I get the below libraries:
libmysqlclient.a libmysqlclient.so libmysql.so.16
libmysqlclient_r.so libmysql.so libmysql.so.16.0.0
If my application is multi-threaded, can I link it with libmysqlclient.a? As per the MySQL documentation (http://forge.mysql.com/wiki/Autotools_to_CMake_Transition_Guide), with the cmake tool, clients are always thread-safe.
After linking my application with libmysqlclient.a, I get a crash with the below call stack:
#0 0x0867878a in my_stat ()
No symbol table info available.
#1 0x08671611 in init_available_charsets.clone.0 ()
No symbol table info available.
#2 0x086720d5 in get_charset_by_csname ()
No symbol table info available.
#3 0x086522af in mysql_init_character_set ()
No symbol table info available.
#4 0x0865266d in mysql_real_connect ()
In my application, I have the below code in the thread function:
if (NULL == (pMySQL = mysql_init(NULL)))
{
    return -1;
}
if (NULL == mysql_real_connect(pMySQL, ServerName, UserName, Password, Name, Port, NULL, 0))
{
    mysql_close(pMySQL);
    return -1;
}
if (0 != mysql_query(pMySQL, pQuery))
{
    mysql_close(pMySQL);
    return -1;
}
mysql_close(pMySQL);
I am not using libmysqlclient_r.so, as I want to link to the MySQL client library statically. Is there any way to generate libmysqlclient_r.a with cmake?
Update:
Without doing anything else, I just changed the mysql client build type to debug. Now I get the crash in the mysql_init() function.
On the application console, I get the below output:
safe_mutex: Trying to lock unitialized mutex at /install/mysqlconnc/mysql-connector-c-6.0.2/mysys/safemalloc.c, line 520
The call stack of the crash is as below:
#0 0x00556430 in __kernel_vsyscall ()
No symbol table info available.
#1 0x45fdf2f1 in raise () from /lib/libc.so.6
No symbol table info available.
#2 0x45fe0d5e in abort () from /lib/libc.so.6
No symbol table info available.
#3 0x086833e5 in safe_mutex_lock (mp=0x915e8e0, my_flags=0,
file=0x895b9d8 "/install/mysqlconnc/mysql-connector-c-6.0.2/mysys/safemalloc.c", line=520)
at /install/mysqlconnc/mysql-connector-c-6.0.2/mysys/thr_mutex.c:178
error = 140915306
__PRETTY_FUNCTION__ = "safe_mutex_lock"
#4 0x08682715 in _sanity (
filename=0x895a87c "/install/mysqlconnc/mysql-connector-c-6.0.2/mysys/my_error.c", lineno=195)
at /install/mysqlconnc/mysql-connector-c-6.0.2/mysys/safemalloc.c:520
irem = 0xf2300468
flag = 0
count = 0
#5 0x0868186b in _mymalloc (size=16,
filename=0x895a87c "/install/mysqlconnc/mysql-connector-c-6.0.2/mysys/my_error.c", lineno=195, MyFlags=16)
at /install/mysqlconnc/mysql-connector-c-6.0.2/mysys/safemalloc.c:130
irem = 0x0
data = 0x0
_db_stack_frame_ = {func = 0x6d617266 <Address 0x6d617266 out of bounds>, file = 0x65685f65 <Address 0x65685f65 out of bounds>,
level = 0, prev = 0x0}
#6 0x0867e0e1 in my_error_register (errmsgs=0x89a7760, first=2000, last=2058)
at /install/mysqlconnc/mysql-connector-c-6.0.2/mysys/my_error.c:194
meh_p = 0x46087568
search_meh_pp = 0x1000
#7 0x08655f7e in init_client_errs ()
at /install/mysqlconnc/mysql-connector-c-6.0.2/libmysql/errmsg.c:238
No locals.
#8 0x08655fe3 in mysql_server_init (argc=0, argv=0x0, groups=0x0)
at /install/mysqlconnc/mysql-connector-c-6.0.2/libmysql/libmysql.c:128
result = 0
#9 0x08651fc0 in mysql_init (mysql=0x0)
at /install/mysqlconnc/mysql-connector-c-6.0.2/libmysql/client.c:1606
Solution:
I put a call to mysql_library_init() before creating the threads and a call to mysql_library_end() after the threads terminate. In each thread, I call mysql_thread_init() at the start of the thread function and mysql_thread_end() at the end. This solved the crashes.
Update:
It seems that you need to call mysql_library_init() before mysql_init(). Quoting the MySQL documentation:
You must either call mysql_library_init() prior to spawning any threads, or else use a mutex to protect the call, whether you invoke mysql_library_init() or indirectly through mysql_init(). Do this prior to any other client library call.
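A minimal sketch of that pattern in a POSIX-threads program; the worker body is a placeholder and error handling is omitted for brevity:

#include <mysql.h>
#include <pthread.h>
#include <stddef.h>

/* Each thread registers itself with the client library,
 * does its MySQL work, then releases its thread-local state. */
static void *worker(void *arg)
{
    mysql_thread_init();
    MYSQL *conn = mysql_init(NULL);
    if (conn != NULL) {
        /* ... mysql_real_connect(), mysql_query(), ... */
        mysql_close(conn);
    }
    mysql_thread_end();
    return NULL;
}

int main(void)
{
    pthread_t tid;

    mysql_library_init(0, NULL, NULL);  /* before any threads are spawned */
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);
    mysql_library_end();                /* after all threads have finished */
    return 0;
}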
Regarding your original question, libmysqlclient_r.so is actually a symbolic link to libmysql.so. You can change libmysql/CMakeLists.txt to produce a static library (libmysql.a) instead by removing the SHARED keyword from the following line:
ADD_LIBRARY(libmysql SHARED ${CLIENT_SOURCES} libmysql.def)
However, I would recommend (1) trying to run the same code without using threads to see if the problem persists, and (2) building and using the debug version of the libraries:
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Debug
make
This way you can investigate the problem in more detail.