So I have some simple unit tests set up in busted. I am a little new to Lua, so I may be missing something obvious.
When I run:
lua test.lua
I get the expected results (7 pass, 1 fails on purpose so I could try out busted) in the nice terminal output.
My ultimate goal, however, is to output JSON results and have a script that consumes the JSON from multiple tests to build some summary pages for my fellow WoW addon developers.
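Roughly, this is the kind of summary script I have in mind (a minimal sketch; the field names "successes", "failures", and "errors" are my guesses at busted's JSON shape, to be adjusted once I can see real output):

import json
import sys

# Tally results across several busted JSON output files, one per test run.
totals = {"successes": 0, "failures": 0, "errors": 0}

for path in sys.argv[1:]:
    with open(path) as f:
        results = json.load(f)
    for key in totals:
        # Assumes each key maps to a list of test records; adjust as needed.
        totals[key] += len(results.get(key, []))

print("passed: %(successes)d, failed: %(failures)d, errors: %(errors)d" % totals)

The plan would be to run it as python summarize.py addon1.json addon2.json ... once each suite can actually emit a JSON file.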
When I run:
lua test.lua -o json
my terminal pauses for a brief second, and I am returned to the command line.
There is no terminal output, nor is any file created.
I am relatively new to Lua and busted in general; could you give me any pointers?
Here is a screenshot:
And here is a link to Busted's website.
The issue in question was caused by the dkjson module not handling functions in tables properly. The bug was fixed in pull request #449, so you should either wait for the fix to reach the next release candidate of busted (>2.0.rc10-0) or just download and build a recent version from here. By the way, the relevant bug report is #448.
I've also raised this here: https://github.com/nkdAgility/azure-devops-migration-tools/issues/1241, as a follow-up to https://github.com/nkdAgility/azure-devops-migration-tools/issues/1189, which I solved. Essentially, though, I'm still stuck on the same processor, the first of seven in my plan.
I am trying to do a single-project migration from TFS 2019 to Azure DevOps version Dev19.M204.1 (AzureDevOps_M204_20220601.5).
I have seven processors I'd like to get working; however, I currently have just the first one enabled as a starter and plan to work my way through the rest.
I'm getting an error telling me the TfsAreaAndIterationProcessor's TfsEndpoint needs to be of $type = TfsWorkItemEndpoint, along with a System.Exception. However, as far as I can see, that's exactly what I have in my config file, so I'm not sure what I'm missing here.
2022-08-22 16:41:13.707 +12:00 [FTL] Error while running TfsAreaAndIterationProcessor
System.Exception: The Source endpoint configured must be of type TfsWorkItemEndpoint
at MigrationTools.Processors.TfsAreaAndIterationProcessor.EnsureConfigured() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\Processors\TfsAreaAndIterationProcessor.cs:line 67
at MigrationTools.Processors.TfsAreaAndIterationProcessor.InternalExecute() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\Processors\TfsAreaAndIterationProcessor.cs:line 38
at MigrationTools.Processors.Processor.Execute() in D:\a\1\s\src\MigrationTools\Processors\Processor.cs:line 106
One of the main difficulties I'm having with this entire process is that the available documentation seems a little sparse on specific examples, especially up-to-date examples that work in 2022 with version 12.0.12.0 of the tool, which is what I'm trying to use. So please bear with me, as I have no experience with migrations to date.
Any assistance appreciated.
Additional info to explain my process: I'm assuming that this processor has to work correctly before the next processor I have lined up, TfsTeamSettingsProcessorOptions, will work, and so on through shared queries, work items, test configurations, etc.?
I am currently trying to solve a reversing challenge, where C code was compiled for a 32-bit Linux system.
To solve this challenge I am trying to make use of Ghidra, but I am faced with a few issues. A quick summary of what I have done up to this point:
I have two OSes available to me: one 64-bit Linux system on my laptop and this 64-bit Windows 10 machine. Apparently the program was compiled with gcc without the -g option, making Ghidra fail to debug the program. Manually debugging it with gdb in the terminal is possible but terrible to use (at least for me).
So all I can do is look at the assembly code in Ghidra's CodeBrowser and its respective decompiled C code. With that, I came to understand that some of the instructions are decrypted during the runtime of the program, and in order to analyse the code further, I want to be able to execute parts of the instructions to slowly but surely decrypt and understand the hidden parts of the program.
That being said, the only issue here is that I do not know how to do that. I have noticed that Ghidra has the ability to run Java code, but all the examples provided with Ghidra that I looked at only allow me to patch hardcoded instructions into the program, not to actually execute/evaluate them.
My specific issue at hand is the following part of the program (the part marked in green):
Ghidra has all the knowledge it needs to execute this part; I just do not know how to make it do that. I could of course do it by hand, but that is just boring and not really why I am doing these challenges. For the same reason, I am not looking for finished scripts that unpack this program for me, but for a way to carry out the analysis myself.
Finally, to summarize my question: I am asking for a way to execute the green-marked decrypting part of the target program in Ghidra without starting the debugger (since the Ghidra debugger keeps failing on me).
I think you are mixing up a few things here. You say:
the program was compiled with gcc without the -g option, making Ghidra fail to debug the program
The debug information added with -g makes it easier to analyze and debug a program, because you have information that would otherwise have to be recovered by reverse engineering. It should not affect whether you can run the program under a debugger in the first place, and as you noted, running it with gdb in the terminal works. The Ghidra debugger basically just runs gdb in the background and attaches to it to exchange information, so it should work too.
You have a few options now:
1. Get the Ghidra Debugger to run with this binary
Whatever issue you are encountering with the Ghidra debugger is probably a valid question for https://reverseengineering.stackexchange.com/
From then on you can pursue your initial plan to solve this via debugging.
2. Write a GhidraScript to reimplement the decryption
Understand the basic idea of what you correctly recognized as some kind of decryption loop. Then you can use one of Ghidra's scripting options[0] to write a simple script that reimplements this decryption but writes the decrypted values to the Ghidra memory directly (see the sketch below).
Any scripting language will obviously include basic arithmetic operations like +, -, and xor, as well as loops, and the Ghidra API provides the functions byte getByte(Address address) and setByte(Address address, byte value). If you encounter any issues or API questions while writing this script, that would also be a valid follow-up question for the RE Stack Exchange.
This approach has the advantage that you can then statically analyse the resulting data inside Ghidra again, e.g. disassemble the resulting code.
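For illustration, here is a rough sketch of such a script in Ghidra's built-in Python (Jython), assuming the loop turns out to be a simple single-byte XOR; the start address, length, and key are placeholders you would replace with the values from your own listing:

# Hypothetical values -- read the real ones out of the green-marked loop.
START  = toAddr(0x08049000)   # start of the encrypted region
LENGTH = 0x80                 # number of encrypted bytes
KEY    = 0x5A                 # single-byte XOR key

for i in range(LENGTH):
    addr = START.add(i)
    b = getByte(addr) & 0xFF  # getByte returns a signed Java byte
    v = b ^ KEY
    if v > 127:
        v -= 256              # convert back to a signed byte for setByte
    setByte(addr, v)

# Clear the old (encrypted) listing and disassemble the decrypted bytes
# so they show up as instructions in the CodeBrowser again.
clearListing(START, START.add(LENGTH - 1))
disassemble(START)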
[0] Ghidra natively supports Java- and Python 2.7-based scripts and a rudimentary Python REPL, but there are other options, like Jupyter integration and scripts written in Kotlin, Ruby, or Clojure.
I've been experimenting with loading functions from the Windows system DLLs using only the loader functions exported by NTDLL. This works as expected. For the sake of curiosity and getting an even better understanding of the process structure in NT-based systems, I've started trying to load functions from NTDLL by doing the following steps:
1. Load the PEB of the process from gs:[60h]
2. Iterate over the modules loaded into the process, according to the loader, to find NTDLL's base address
3. Parse the PE headers of NTDLL
4. Try to parse the export table to find LdrLoadDll, LdrGetDllHandle, and LdrGetProcedureAddress
This fails at step 4. After stepping through it in a debugger (both VS2019 and WinDbg Preview), it seems the offsets I've tried yield an invalid structure, which leads to an access violation when my code compares the current function name to one of the ones I'm searching for. My code is being compiled and run on a 64-bit copy of Windows 10 Pro, build 21364. Note that I'm using my own header containing definitions for the structures involved (the definitions come from winnt.h and here), because the Windows headers don't really play nice with the rest of my code. The function trying to do this is here. For the record, this is part of an attempt to implement my own libc (again, for the sake of curiosity). The code that calls the functions is here. Any help with this is tremendously appreciated.
Never mind; it turns out I had outdated, verbose definitions of the structures I was using. I found better (more up-to-date) definitions at https://vergiliusproject.com.
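For anyone following along: once the offsets are right, steps 3 and 4 are quick to prototype. Here is a rough in-process sketch (in Python with ctypes rather than my C code, and with hardcoded PE32+ offsets instead of structure definitions); GetModuleHandleW stands in for the manual PEB walk of steps 1 and 2:

import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetModuleHandleW.restype = ctypes.c_void_p
kernel32.GetModuleHandleW.argtypes = [ctypes.c_wchar_p]

base = kernel32.GetModuleHandleW("ntdll.dll")   # NTDLL's base address
assert base is not None

u16 = lambda a: ctypes.c_uint16.from_address(a).value
u32 = lambda a: ctypes.c_uint32.from_address(a).value

e_lfanew = u32(base + 0x3C)         # IMAGE_DOS_HEADER.e_lfanew
opt = base + e_lfanew + 0x18        # optional header: signature (4) + file header (20)
exp = base + u32(opt + 0x70)        # DataDirectory[0] = export directory RVA (PE32+)

num_names = u32(exp + 0x18)         # IMAGE_EXPORT_DIRECTORY.NumberOfNames
funcs     = base + u32(exp + 0x1C)  # AddressOfFunctions
names     = base + u32(exp + 0x20)  # AddressOfNames
ordinals  = base + u32(exp + 0x24)  # AddressOfNameOrdinals

for i in range(num_names):
    name = ctypes.string_at(base + u32(names + 4 * i)).decode()
    if name in ("LdrLoadDll", "LdrGetDllHandle", "LdrGetProcedureAddress"):
        addr = base + u32(funcs + 4 * u16(ordinals + 2 * i))
        print("%s at %#x" % (name, addr))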
I need to perform push-ins for my JSON controllers; however, doing it via STS is really tedious. For my demo project it works, since it only contains 10 POJOs, but a real-world project may have 20-50 POJOs.
Is there a way to perform push-ins via the command line, or any other way to automate them?
I am asking because of my previous issue, which cannot be solved by Spring Roo's current version:
RooWebJson and KendoUI Grid
No, there is no way to push in code via the command line. The best way is to use STS, but note that you will push in the code only once.
When attempting to use pandoc to convert JSON-based files (.ipynb) from the iPython notebook (0.12), I receive an error stating "bad decodeArgs" for the JSON. I suspect it may be due to the Ubuntu-provided version of pandoc I am using (1.8.1.1). It seems that getting the latest pandoc version requires setting up the Haskell Platform, which I was not successful in doing because of dependency challenges (and which I really don't want to do). I don't want to spend any more time trying to install Haskell if this is not my problem.
Is there a way to get the latest pandoc binaries for Ubuntu without building from source?
Given that the iPython notebook is new (and very cool!!), it would be nice to hear about experiences translating its JSON to other formats. Perhaps there is a different way to accomplish this than pandoc.
Regarding keeping up to date with pandoc: I'm afraid you do need Haskell installed. The best way to do this is via the Haskell Platform ("HP") package; then, just as with Ruby, it's a lot more consistent to use that environment's package manager for dependencies than your OS's. I've had no trouble getting it working, even in Windoze...
I'm sure questions to the Haskell mailing list would result in quick help for a platform as mainstream as Debian/Ubuntu, but you might need to manually install a newer version of HP than what's available through the OS package manager.
Once you get HP up and running, the development version of pandoc is dead easy to compile, and git will keep you up to date with the latest; specific instructions, currently maintained, are here:
https://github.com/jgm/pandoc/wiki/Installing-the-development-version-of-pandoc-1.9
Note that v1.9 has now been officially released, in case you really don't want the trouble of keeping up with the dev cycle; but again, you won't get it in your OS package manager for quite some time after release (I assume, anyway).
==========================
Regarding your attempts to treat JSON as a document syntax:
The best syntax inputs for Pandoc at this point are its native markdown+extensions and reST (especially for Python people/environments). These are basically maintained as functionally equivalent, although there may be features available in the former that aren't represented in the latter, since John can just add extensions anytime he wants. AFAIK Pandoc hasn't begun to support the Sphinx extensions (yet?).
The JSON format used internally by Pandoc isn't documented (yet?), but it mirrors the native Haskell data types. As Thomas K notes, there may be some similarity between how the two tools represent data, but probably not enough to treat either as "just another markup format".
However, if you're working on this, it's easy enough to see what Pandoc looks for in the way of JSON input.
pandoc -t json
compare this to
pandoc -t native
and it's easy to see the specs created by Text.Pandoc.Definition and Text.JSON.Generic
Using Pandoc's internal data representation as input would obviously be more stable than a marked-up text stream. Others have expressed a desire for documentation of this format, and writing some would be a great contribution to the community.
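As a quick way to experiment with that round trip from a script (a minimal sketch, assuming a pandoc binary on your PATH recent enough to accept both -t json and -f json):

import subprocess

def pandoc(args, text):
    # Pipe `text` through pandoc with the given arguments and return stdout.
    p = subprocess.Popen(["pandoc"] + args,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(text)
    return out

# markdown -> Pandoc's internal document model, serialized as JSON
doc = pandoc(["-f", "markdown", "-t", "json"], b"Hello *world*\n")

# ...inspect or transform the JSON document model here...

# JSON -> any supported output format
print(pandoc(["-f", "json", "-t", "html"], doc).decode())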
Please do inform the Pandoc mailing list of any work done in this area. The crew there is very responsive, and that includes quick feedback from John M (the lead developer) himself.
I doubt pandoc or any other tool knows what to do with .ipynb files yet (at the time of writing, the IPython notebook was released less than a month ago). JSON is just a generic data structure like XML, not a document format.
We're (IPython) working on tools to export notebooks to other formats, but they're not ready for a proper release yet. If you want to help develop them, see this mailing list thread. Hopefully this will be part of the next IPython release.