I'm completely new to Python, but I have an async Python app running on uvloop that uses a C API module I created, and that C module also needs access to the async loop.
1) asyncio does not yet have a C API for this? Are there any hacks to get an event loop usable from C? Is this being discussed anywhere?
2) uvloop uses libuv, which I am familiar with from C. If I can grab the uv_loop_t pointer, I could hook into the loop. I assume I can either:
A) Take a PyObject * to uvloop's loop, calculate the offset to the uv_loop_t *, and use that, assuming I know the size of PyObject_HEAD? Something like:
libuv_loop = (uv_loop_t *)((char *)loop + 0x8);  /* 0x8 being the guessed size of PyObject_HEAD */

struct __pyx_obj_6uvloop_4loop_Loop {
    PyObject_HEAD
    uv_loop_t *uvloop;
    /* ... */
};
B) Or, less hacky: modify uvloop to expose the loop pointer. I'm completely clueless here, as I've never looked at Cython code. Can I create a Python function on the loop, call it from my C code, and get the C pointer back? Something like:
(uv_loop_t*)PyObject_CallFunctionObjArgs( getLoop, NULL )
By adding getLoop here:
https://github.com/MagicStack/uvloop/blob/master/uvloop/loop.pyx
cdef uv.uv_loop_t* _getLoop(self):
    return self.uvloop
asyncio has no C API yet.
We have a plan to add one in a future Python version (3.8, maybe).
Right now you should use the PyObject_* API.
uvloop is written in Cython, but the library has no public C API either. You can reach into uvloop's private API, but the exposed function names and data structures may change at any moment without notice because they are considered private; users should never rely on them.
I was looking for this too, and coincidentally uvloop added a loop.get_uv_loop_t_ptr() method just a few days ago :)
https://github.com/MagicStack/uvloop/pull/310
Now we just have to wait for a new release (v0.17 ?) that includes this PR (or build it ourselves).
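Once a release with that PR lands (or with a local build), grabbing the pointer from Python might look roughly like the sketch below. This is only a sketch: I'm assuming get_uv_loop_t_ptr() hands back the uv_loop_t* wrapped in a PyCapsule that a C extension can unwrap with PyCapsule_GetPointer(), and my_c_module is a hypothetical extension of yours, so check the PR for the exact return type:

import asyncio
import uvloop
# import my_c_module  # hypothetical C extension that unwraps the capsule

async def main():
    loop = asyncio.get_running_loop()
    # Assumption: the uv_loop_t* comes back wrapped in a PyCapsule.
    capsule = loop.get_uv_loop_t_ptr()
    # my_c_module.register_uv_loop(capsule)
    print(capsule)

uvloop.install()
asyncio.run(main())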
When I reverse the binary with the IDA GUI, all the functions get decompiled without a problem.
But when I run an automated script on IDA without the GUI, there is always the same function that refuses to be decompiled. (When I open the same IDB that the automated script worked on, the function gets decompiled without a problem.)
I am using bip, and I use BipFunc.can_decompile to check whether a function can be decompiled.
EDIT:
According to an answer below, I have tried adding the following:
if not func.can_decompile:
    print(f"can't decompile function 0x{func.ea:04x}, trying again")
    decomp_all()
if not func.can_decompile:
    print(f"can't decompile function 0x{func.ea:04x}, trying again")
    decomp_all_twice_cacheclear()
if not func.can_decompile:
    print(f"can't decompile function 0x{func.ea:04x}, skipping...")
    return
Sadly it did not work; I get all three prints every time, even on different binaries.
It seems to be fixed in IDA Pro 7.6.
There are several reasons you can get a decompilation error from IDA. If it works in some cases and not in others, it is probably because of the call analysis. When decompiling a function, IDA tries to gather information about the functions called by it, and in some cases it fails to get that information, which makes the decompilation fail. But once the callee has been decompiled, the information fetched by IDA is updated, so the decompilation of the caller might now work. Basically this means you have to decompile the functions in a particular order, which is a pain; the simplest fix is to just decompile everything twice, though that can take quite some time on "big" binaries.
I thought I had put this in the Bip repository somewhere but I can't find it, so here is a small plugin which should allow you to do that:
from bip import *

class DecompileAll(BipPlugin):
    """
    Plugin for decompiling all the functions in the binary.
    """

    @menu("Bip/DecompileAll/", "Invalidate hexrays caches")
    def clear_hxcCache(self):
        HxCFunc.invalidate_all_caches()

    @menu("Bip/DecompileAll/", "Decompile all func")
    def decomp_all(self):
        count = 0
        for f in HxCFunc.iter_all():  # iterating forces the decompilation of each function
            count += 1
        print("0x{:X} functions decompiled".format(count))

    @menu("Bip/DecompileAll/", "Decompile twice with cache clear")
    def decomp_all_twice_cacheclear(self):
        HxCFunc.invalidate_all_caches()
        self.decomp_all()
        self.decomp_all()
Just for information: the basic reason for a decompilation error is that the decompiler is not able to make a correct translation of some piece of code because it does not understand the assembly. This is typically the case if there was a problem during the analysis and the code was not correctly detected (it also happens a lot if you are dealing with obfuscation). You can usually spot this case from an error saying "failed analysis at ADDR" in the IDAPython console, and then look at the problem there. Probably not your case, but it might still help.
Glad to hear you are using bip. About the BipFunc.can_decompile function: as indicated in the documentation (https://synacktiv.github.io/bip/build/html/base/func.html#bip.base.BipFunction.can_decompile), it will just try to decompile the function and see if an error occurs. The code is pretty straightforward (https://github.com/synacktiv/bip/blob/master/bip/base/func.py#L372); it is mostly written for use in one-liners and is the same thing as catching the exception when trying to decompile.
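For reference, that is roughly equivalent to the sketch below, assuming (as the linked source suggests) that accessing BipFunction.hxcfunc raises an exception when hex-rays fails; the exact exception type is an assumption here, so a broad except is used:

from bip import *

def decompile_or_none(func):
    try:
        return func.hxcfunc  # forces the hex-rays decompilation of the function
    except Exception:
        print("decompilation failed for 0x{:X}".format(func.ea))
        return None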
I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. TLRegisterNode has methods for generating the JSON through some annotations; these are added in the regmap method by appending them to the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" register fields during or after elaboration?
I cannot just access the regmap, as it's not really a val/var since it's a method. I can't quite figure out where this information is stored. I don't really believe it's actually "storing" any information so much as simply creating the hardware that attaches the specified logic to the RegisterNode-based logic.
The JSON output is actually fine for me, as I could just write a post-processing script to convert the JSON to my required formats, but I'm wondering if I can access this information OR if I could add a custom function call at the end. I cannot extend the *RegisterNode case classes, but I'm not sure if it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
// in *RegisterRouter.scala
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}

def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function to run and pass it to customregmap on the RegisterRouter node:
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields,
)
This is just a quick example. I believe it would be better, if something like this were possible, to have a Seq of functions that could be added to the RegisterNode and that are run at the end of the regmap method, similar to how TLRegisterNode currently works. That way a user could add an arbitrary number of them and still use the normal regmap call.
Background (not directly part of the question):
I have a unified register script that I have built over the years, in which I describe the registers for a particular IP. It works very similarly to RegField/node.regmap, except it obviously doesn't know about diplomacy and the like. It generates the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations and more complex uvm_reg_block defines, with the ability to describe multiples of the IP for a subsystem, all the way up to the SoC level). It also prints out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to push most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different ElaborationArtefact to grab the info and store it for later.
I am using a third-party library that is a wrapper over some C functions. Unfortunately, nearly all of the Go functions are free functions (they don't have a receiver; they are not methods). That's not the design approach I would have taken, but it is what I have.
Using just Go's standard "testing" library:
Is there a solution that allows me to create tests where I can mock functions?
Or is the solution to wrap the library into structures and interfaces, then mock the interface to achieve my goal?
I have created a Monte Carlo simulation that also processes the produced dataset. One of my evaluation algorithms looks for specific models that it then passes to the third-party function for evaluation. I know my edge cases and what the call counts should be, and this is what I want to test.
Perhaps a simple counter is all that is needed?
Other projects using this library that I have found either do not have full coverage or have no testing at all.
You can do this by using a reference to the actual function whenever you need to call it.
Then, when you need to mock the function, you just point the reference to a mock implementation.
Let's say this is your external function:
// this is the wrapper for an external function
func externalFunction(i int) int {
    return i * 10 // do something with input
}
You never call this directly but instead declare a reference to it:
var processInt func(int) int = externalFunction
When you need to invoke the function you do it using the reference:
fmt.Println(processInt(5))
Then, when you want to mock the function, you just assign a mock implementation to that reference:
processInt = mockFunction
This playground puts it together and might be more clear than the explanation:
https://play.golang.org/p/xBuriFHlm9
If you have a function that receives a func(int) int, you can pass it either the actual externalFunction or the mockFunction, depending on whether you want the mock implementation to be used.
I want to create a Lambda function that runs through S3 files and, if needed, triggers other Lambda functions to parse the files in parallel.
Is this possible?
Yes, it's possible. You would use the AWS SDK (which is included in the Lambda runtime environment for you) to invoke other Lambda functions, just as you would in code running anywhere else.
You'll have to specify which language you are writing the Lambda function in if you want a more detailed answer.
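For example, if the Lambdas are written in Python (an assumption; the question doesn't say), invoking another function with boto3 looks roughly like the sketch below. The function name and payload fields are placeholders:

import json
import boto3

lambda_client = boto3.client("lambda")

def invoke_parser(bucket, key):
    # Fire-and-forget ("Event") invocation of a hypothetical parser Lambda;
    # use InvocationType="RequestResponse" if you need the result back.
    lambda_client.invoke(
        FunctionName="s3-file-parser",  # placeholder function name
        InvocationType="Event",
        Payload=json.dumps({"bucket": bucket, "key": key}),
    )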
If I understand your problem correctly, you want one Lambda that goes through a list of files in an S3 bucket. Some condition decides whether a file should be parsed or not, and for the files that should be parsed, you want another 'file-parsing' Lambda to do the parsing.
To do this you will need two Lambdas: one 'S3 reader' and one 'S3 file parser'.
For triggering the 'S3 file parser' Lambda you have a few different options. Here are two:
Trigger it using an SNS topic. (Here is an article on how to do that.) If you have a very long list of files this might be an issue, as you will most likely exceed the number of Lambda instances that can run in parallel.
Trigger it by invoking it with the AWS SDK. (See the article 'Leon' posted as a comment to see how to do that.) What you need to consider here is that a long list of files might cause the 'S3 reader' Lambda that controls the invocations to time out, since there is a 5-minute runtime limit for a Lambda.
Depending on the actual use case, another potential solution is to have just one Lambda that gets triggered when a file is uploaded to the S3 bucket, let it decide whether the file should be parsed, and then parse it if needed. More info about how to do that can be found in this article and this tutorial.
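Again assuming Python (the question doesn't specify a language), a single S3-triggered Lambda along those lines might look like this sketch; the filter condition and the parse_file helper are placeholders:

def lambda_handler(event, context):
    # S3 put notifications deliver one or more records describing the new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if key.endswith(".csv"):  # placeholder for "should this file be parsed?"
            parse_file(bucket, key)

def parse_file(bucket, key):
    # hypothetical parsing routine
    print("parsing s3://{}/{}".format(bucket, key))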
I've seen that some projects use a _ function that takes a string as an argument, like _("Hello World"). But I couldn't find any manuals or articles about what it is and how to use it.
I guess this has something to do with i18n and l10n (it was mentioned in some article I found on the internet), but can you explain to me how it works and how to use it?
That is the GNU gettext localization function. You can provide language-specific alternative strings for the one specified in the function call.
There is the xgettext tool, which generates a .pot file (short for "portable object template") from your application code; translators can then make .po localization files from it. You can bundle these with your application and deliver a more widely usable piece of software.
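The question doesn't name a language, but as an illustration of the same idiom, this is roughly how _ gets wired up with Python's gettext module (the "myapp" domain and "locale" directory are placeholders):

import gettext

# Bind "_" for the "myapp" text domain; "locale/" is where the compiled
# translation catalogs produced from the .po files would live.
gettext.install("myapp", localedir="locale")

print(_("Hello World"))  # prints the translated string for the current locale, if one exists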
I18n. See gettext example here: https://ewgeny.wordpress.com/2012/05/10/supporting-multiple-languages-in-your-application-a-simple-gettext-step-by-step-example/
I also found some info about what exactly this function does: it seems to be a macro for the Glib.dgettext() function in Vala. This is from valadoc.org:
dgettext
public unowned string dgettext (string? domain, string msgid)
This function is a wrapper of dgettext which does not translate the message if the default domain as set with textdomain has no translations for the current locale.
...
Applications should normally not use this function directly, but use the _ macro for translations.