Gradle Kotlin DSL task extension method - build.gradle.kts

I want to modify my build.gradle.kts by implementing some tasks. Specifically, I want to obtain the output of the first task in my second task, where the first task runs a shell command. There are some basic examples here and here, which are implemented in the Groovy DSL. Now I need this functionality in the Kotlin DSL.
A working example is:
task<Exec>("avdIsRunning") {
    commandLine("adb", "devices")
    standardOutput = ByteArrayOutputStream()
}
task("task2") {
    dependsOn("avdIsRunning")
    doLast {
        val standardOutput = (tasks.getByName("avdIsRunning") as Exec).standardOutput.toString()
        println("Foo's output: $standardOutput")
    }
}
What I want is to call an extension method avdIsRunning.output() that provides the standardOutput of the avdIsRunning task, comparable to the examples I linked above.
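To make the goal concrete, here is a minimal sketch of the kind of extension I have in mind (output() is not an existing Gradle API; it just wraps the cast from the working example above and assumes standardOutput was set to a ByteArrayOutputStream):
// Not an existing Gradle API: a build-script extension that reads the captured output,
// assuming the task is an Exec whose standardOutput was redirected as shown above.
fun Task.output(): String = (this as Exec).standardOutput.toString()

task("task2") {
    dependsOn("avdIsRunning")
    doLast {
        println("Foo's output: ${tasks.getByName("avdIsRunning").output()}")
    }
}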

Related

Scala Mock: MockFunction0-1() once (never called - UNSATISFIED)

I'm working on a Scala object in order to perform some testing.
My starting object is as follows:
object obj1 {
  def readvalue: IO[Float] = IO {
    scala.io.StdIn.readFloat()
  }
}
The testing should verify that:
1. the value is of type Float
2. it is less than 3
As we cannot mock singleton objects, I've used mock functions. Here is what I've done:
class FileUtilitiesSpec
    extends FlatSpec
    with Matchers
    with MockFactory {

  "value" should "be of Type Float" in {
    val alpha = mockFunction[() => IO[Float]]
    alpha.expects shouldBe a[IO[Float]]
  }

  "it" should "be less than 3" in {
    val alpha = mockFunction[() => IO[Float]]
    alpha.expects shouldBe <(3)
  }
}
I'm getting an error saying:
MockFunction0-1() once (never called - UNSATISFIED) was not an instance of cats.effect.IO, but an instance of org.scalamock.handlers.CallHandler0
ScalaTestFailureLocation: util.FileUtilitiesSpec at (FileUtilitiesSpec.scala:16)
Expected :cats.effect.IO
Actual :org.scalamock.handlers.CallHandler0
I would recommend reading the examples here as a starting point: https://scalamock.org/quick-start/
Using mock objects only makes sense if you are planning to use them from some other code, e.g. for dependencies you do not want to make part of your module under test, or that are beyond your control.
An example might be a database connection, where you would depend on an actual system, making the code hard to test without simulating it.
The example you provided only has mocks, but no code using them, hence the error you are getting is absolutely correct: the mock expects to be used, but was not called.
The desired behaviour for a mocking library in this case is to make the test fail, as the developer intended for this interaction with a mock to happen but it was not recorded - so something is wrong.
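To illustrate the principle outside ScalaMock, here is a minimal sketch in Kotlin using MockK (the ValueReader interface and the isSmallEnough function are made up for the example): the expectation on the mock is only satisfied because the code under test actually calls it.
import io.mockk.every
import io.mockk.mockk
import io.mockk.verify

// Hypothetical dependency and caller, just to show the shape of the interaction.
interface ValueReader {
    fun readValue(): Float
}

// The "code under test": this call is what satisfies the mock's expectation.
fun isSmallEnough(reader: ValueReader): Boolean = reader.readValue() < 3

fun main() {
    val reader = mockk<ValueReader>()
    every { reader.readValue() } returns 2.5f  // stub the dependency
    check(isSmallEnough(reader))               // exercise the code under test
    verify { reader.readValue() }              // the interaction was recorded, so this passes
}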

Gradle - using a function from within Ant

I have a Gradle script with a few invocations of XJC to generate JAXB classes from XSD.
I thought I could parametrize these invocations and reuse the common code.
So I created a function:
ext.generateJaxbClasses = { HashMap params ->
    project.ant {
        ...
And then I wanted to use it:
task genJaxb {
    ext.generic = [
        schema: "..."
    ]
    doLast() {
        ext.generateJaxbClasses(jaxbSetA)
        ext.generateJaxbClasses(jaxbSetB)
    }
}
But I get this error:
> No signature of method: org.gradle.internal.extensibility.DefaultExtraPropertiesExtension.generateJaxbClasses() is applicable for argument types: (LinkedHashMap) values: [[...]]
How can I use the function within a task definition?
Use project.ext.generateJaxbClasses(jaxbSetA)
Using ext inside the task resolves to searching for the property inside the task's extension container.
See ExtensionAware.
I would suggest using an actual free function inside the project rather than using extensions, as they can lead to this kind of frustration.
def generateJaxbClasses(HashMap params) {
    project.ant {...}
}

Import XSD to OpenAPI

I have some model definitions inside an XSD file and I need to reference these models from an OpenAPI definition. Manually remodeling is not an option since the file is too large, and I need to put this into a build system so that, if the XSD is changed, I can regenerate the models/schemas for OpenAPI.
What I tried, and what nearly worked, is using xsd2json and then converting the result with the node module json-schema-to-openapi. However, xsd2json drops some of the complexElement models. For example, "$ref": "#/definitions/tns:ContentNode" is used inside one model as the child type, but there is no definition for ContentNode in the schema, whereas when I look into the XSD, there is a complexElement definition for ContentNode.
Another approach, which I haven't tried yet but seems a bit excessive to me, is using XJC to generate Java models from the XSD and then using Jackson's JSON Schema module to generate the JSON schema.
Is there any established library or way to use an XSD in OpenAPI?
I ended up implementing the second approach, using JAXB to convert the XSD to Java models and then using Jackson to write the schemas to files.
Gradle:
plugins {
    id 'java'
    id 'application'
}

group 'foo'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.12'
    compile group: 'com.fasterxml.jackson.module', name: 'jackson-module-jsonSchema', version: '2.9.8'
}

configurations {
    jaxb
}

dependencies {
    jaxb (
        'com.sun.xml.bind:jaxb-xjc:2.2.7',
        'com.sun.xml.bind:jaxb-impl:2.2.7'
    )
}

application {
    mainClassName = 'foo.bar.Main'
}

task runConverter(type: JavaExec, group: 'application') {
    classpath = sourceSets.main.runtimeClasspath
    main = 'foo.bar.Main'
}

task jaxb {
    System.setProperty('javax.xml.accessExternalSchema', 'all')
    def jaxbTargetDir = file("src/main/java")
    doLast {
        jaxbTargetDir.mkdirs()
        ant.taskdef(
            name: 'xjc',
            classname: 'com.sun.tools.xjc.XJCTask',
            classpath: configurations.jaxb.asPath
        )
        ant.jaxbTargetDir = jaxbTargetDir
        ant.xjc(
            destdir: '${jaxbTargetDir}',
            package: 'foo.bar.model',
            schema: 'src/main/resources/crs.xsd'
        )
    }
}
compileJava.dependsOn jaxb
With a converter main class that does something along the lines of:
package foo.bar;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.jsonSchema.JsonSchema;
import com.fasterxml.jackson.module.jsonSchema.JsonSchemaGenerator;

import foo.bar.model.Documents;

public class Main {

    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();
        JsonSchemaGenerator schemaGen = new JsonSchemaGenerator(mapper);
        try {
            JsonSchema schema = schemaGen.generateSchema(Documents.class);
            System.out.print(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema));
        } catch (JsonMappingException e) {
            e.printStackTrace();
        } catch (JsonProcessingException e) {
            e.printStackTrace();
        }
    }
}
It is still not perfect, though: this would need to iterate over all the model classes and generate a schema file for each. Also, it doesn't use references; if a class has a member of another class, that schema is printed inline instead of being referenced. This requires a bit more customization with the SchemaFactoryWrapper, but it can be done.
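A rough sketch of that iteration, in Kotlin for brevity (the build/schemas output directory is an assumption, and only the Documents class from above is listed):
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.jsonSchema.JsonSchemaGenerator
import foo.bar.model.Documents
import java.io.File

fun main() {
    val mapper = ObjectMapper()
    val schemaGen = JsonSchemaGenerator(mapper)
    // Hypothetical list of generated JAXB model classes; in practice this could
    // be built by scanning the foo.bar.model package.
    val modelClasses = listOf(Documents::class.java)
    val outputDir = File("build/schemas").apply { mkdirs() }
    for (clazz in modelClasses) {
        // One pretty-printed schema file per model class.
        val schema = schemaGen.generateSchema(clazz)
        File(outputDir, "${clazz.simpleName}.json")
            .writeText(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema))
    }
}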
The problem you have is that you are applying inference tooling over a multi-step conversion. As you have found, inference tooling is inherently fussy and will not work in all situations. It's kind of like playing Chinese whispers - every step of the chain is potentially lossy, so what you get out the other end may be garbled.
Based on the alternative approach you suggest, I would suggest a similar solution:
OpenAPI is, rather obviously, an API definition standard. It should be possible for you to take a code-first approach, composing your API operations in code and exposing the types generated by XJC. Then you can use Apiee and its annotations to generate the OpenAPI definition. This assumes you are using JAX-RS for your API.
This is still a two-step process, but one with a higher chance of success. The benefit here is that your first step, inferring your XSD types into Java types, will hopefully have very little (if any) impact on the code which defines your API operations. Although there will still be a manual step (updating the models), the OpenAPI definition will update automatically once the code has been rebuilt.

SpringBatch - how to set up via java config the JsonLineMapper for reading a simple json file

How do I change from "setLineTokenizer(new DelimitedLineTokenizer()...)" to "JsonLineMapper" in the first code below? Basically, it is working with CSV but I want to change it to read a simple JSON file. I found some threads here asking about complex JSON, but this is not my case. At first I thought I should use a very different approach than the CSV way, but after I read SBiAch05sample.pdf (see the link and snippet at the bottom), I understood that FlatFileItemReader can be used to read JSON format.
From an almost similar question, I can guess that I am not going in the wrong direction. Please, I am trying to find the simplest but elegant and recommended way of fixing this snippet of code. So the wrapper below, unless I am really obligated to work this way, seems to go too far. Additionally, the wrapper seems to me more Java 6 style than my attempt, which takes advantage of anonymous classes from Java 7 (as far as I can judge from my studies). Any advice is highly appreciated.
// My code
@Bean
@StepScope
public FlatFileItemReader<Message> reader() {
    log.info("ItemReader >>");
    FlatFileItemReader<Message> reader = new FlatFileItemReader<Message>();
    reader.setResource(new ClassPathResource("test_json.js"));
    reader.setLineMapper(new DefaultLineMapper<Message>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[] { "field1", "field2"...
//Sample using a wrapper
http://www.manning.com/templier/SBiAch05sample.pdf
import java.util.Map;

import org.springframework.batch.item.file.LineMapper;
import org.springframework.batch.item.file.mapping.JsonLineMapper;

import com.manning.sbia.ch05.Product;

public class WrappedJsonLineMapper implements LineMapper<Product> {

    private JsonLineMapper delegate;

    public Product mapLine(String line, int lineNumber) throws Exception {
        Map<String, Object> productAsMap = delegate.mapLine(line, lineNumber);
        Product product = new Product();
        product.setId((String) productAsMap.get("id"));
        product.setName((String) productAsMap.get("name"));
        product.setDescription((String) productAsMap.get("description"));
        product.setPrice(new Float((Double) productAsMap.get("price")));
        return product;
    }

    public void setDelegate(JsonLineMapper delegate) {
        this.delegate = delegate;
    }
}
Really you have two options for parsing JSON within a Spring Batch job:
1. Don't create a LineMapper, create a LineTokenizer. Spring Batch's DefaultLineMapper breaks the parsing of a record into two phases: parsing the record and mapping the result to an object. The fact that the incoming data is JSON rather than CSV only impacts the parsing piece (which is handled by the LineTokenizer). That being said, you'd have to write your own LineTokenizer to parse the JSON into a FieldSet.
2. Use the provided JsonLineMapper. Spring Batch provides a LineMapper implementation that uses Jackson to deserialize JSON objects into Java objects.
In either case, you can't map a LineMapper to a LineTokenizer as they accomplish two different things.
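For the second option, a minimal sketch of wiring the provided JsonLineMapper into the reader (shown in Kotlin for brevity; the test_json.js resource comes from the question, and mapping the resulting Map onto a Message object would still be a separate step):
import org.springframework.batch.item.file.FlatFileItemReader
import org.springframework.batch.item.file.mapping.JsonLineMapper
import org.springframework.core.io.ClassPathResource

// JsonLineMapper parses each JSON line into a Map<String, Any>,
// so the reader's item type is a Map rather than Message.
fun jsonReader(): FlatFileItemReader<Map<String, Any>> {
    val reader = FlatFileItemReader<Map<String, Any>>()
    reader.setResource(ClassPathResource("test_json.js"))
    reader.setLineMapper(JsonLineMapper())
    return reader
}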

Why is this groovy code throwing a MultipleCompilationErrorsException?

I have the following Groovy code:
class FileWalker {
    private String dir
    public static void onEachFile(String dir, IAction ia) {
        new File(dir).eachFileRecurse {
            ia.perform(it)
        }
    }
}

walker = new FileWalker()
walker.onEachFile(args[0], new PrintAction())
I noticed that if I place a def in front of walker, the script works. Shouldn't this work the way it is now?
You don't need a def in groovyConsole or in a Groovy script. I consider it good programming practice to have it, but the language will work without it and add those kinds of variables to the script's binding.
I'm not sure about the rest of your code (as it won't compile as you've posted it), but either you have a really old version of Groovy or something else is wrong with your config or the rest of your code.
With the addition of a stub for the missing IAction interface and PrintAction class, I'm able to get it to run without modification:
interface IAction {
    def perform(obj)
}

class PrintAction implements IAction {
    def perform(obj) {
        println obj
    }
}

class FileWalker {
    private String dir
    public static void onEachFile(String dir, IAction ia) {
        new File(dir).eachFileRecurse {
            ia.perform(it)
        }
    }
}

walker = new FileWalker()
walker.onEachFile(args[0], new PrintAction())
I created a dummy directory with "foo/bar" and "foo/baz" files.
If I save it to "walkFiles.groovy" and call it from the command line with
groovy walkFiles.groovy foo
It prints:
foo/bar
foo/baz
This is with the latest version of groovy:
groovy -v
Groovy Version: 1.6-RC-3 JVM: 1.5.0_16
In scripting mode (or via "groovyConsole"), you need to declare walker with "def" before using it. A Groovy script file is translated into a class derived from class Script before it gets compiled, so every declaration needs to be done properly.
On the other hand, when you're running a script in "groovysh" (or using an instance of class GroovyShell), its mechanism automatically binds every referenced object without the need for a declaration.
Update:
My answer above is wrong: I decompiled a Groovy .class file and found that it uses a binding object inside the script as well, so my first paragraph was indeed incorrect.