Spring-Roo Push In via Command Line - json

I need to perform push-ins for my JSON controllers, but doing it via STS would be really tedious. It works for my demo project, since that only contains 10 POJOs, but a real-world project may have 20-50 POJOs.
Is there a way to perform push-ins via command line or any way to automate it?
I am asking because of my previous issue, which cannot be solved by Spring Roo's current version:
RooWebJson and KendoUI Grid

No, there is no way to push in code via the command line. The best way is to use STS, but note that you only push in the code once.

Related

How to manage JSON-Schemas for multiple projects?

Suppose you have a schema that is used in a UI app (e.g. Vue), a Node.js or Spring Boot server, and has to validate against databases (e.g. SQL, MongoDB, ...), and maybe some microservices running on whatever.
How and where do I manage this JSON Schema, so that if I have to change it for whatever reason, every architectural component can handle the new JSON Schema(s)?
Otherwise I need to update the schema in up to 10 projects so that none becomes incompatible.
Is it really as simple as having a git project containing just JSON Schemas, or do I need specific loaders for each language/environment?
Are there best practices that I am unaware of?
PS: I don't really think I need the schemas automatically synchronized at runtime, so I don't think I need another microservice to achieve that.
That being said, if a microservice is the best way to go, then a microservice it is.
If you keep them in a git project, how do you load them? Clone the project each time the app starts? That may work, but I would go with a more flexible approach that shouldn't take too much effort:
Build a JSON schema repository accessible via a REST API
When the app starts, it makes a request to grab the schema (latest, or a specific version)
That way you get a uniform (and scalable) way of working with the schemas. Even if you think about hot-reloading sometime in the future, you can leverage this approach to do that.
Here is an old project in this direction; you may give it a shot to see if it works (or look to it for some inspiration, at least).
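As a minimal sketch of that startup fetch, in Python (the endpoint URL, the sample document, and the use of the third-party requests and jsonschema packages are assumptions, not part of the original suggestion):

# Sketch: fetch a schema from a hypothetical schema-repository endpoint at
# startup, then validate incoming data against it.
import requests
import jsonschema

SCHEMA_URL = "https://schemas.example.com/user/v2"  # hypothetical endpoint

schema = requests.get(SCHEMA_URL, timeout=5).json()      # latest, or a pinned version
jsonschema.validate({"name": "Ada", "age": 36}, schema)  # raises ValidationError on mismatch

Each component pulls the same schema from one place, so a schema change only has to happen in the repository.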

Cucumber examples reuse in different features/scenarios

I've been using Cucumber for a while and I've stumbled upon a problem:
Actual question:
Is there a solution to import the examples from a single file/db using cucumber specifically as examples?
Or alternatively is there a way to define a variable while already in-step to be an example?
Or alternatively again, is there an option to send the examples as variables when I launch the feature file/scenario?
The Problem:
I have a couple of scenarios where I would like to use exactly the same examples, over and over again.
It sounds rather easy, but the examples table is very large (more specifically, it contains all the countries in the world and their corresponding continents). Repeating it would thus be very troublesome, especially if the table needs changing (I would need to change all the instances of the table separately).
Complication:
I have a rerun function that knows when a specific example failed and reruns it after the test is done.
Restrictions:
I do not want to edit my rerun file
Related:
I've noticed that there is already an open discussion about importing examples from CSV here:
Importing CSV as test data in Cucumber?
However, that discussion doesn't help me, because my rerun function only knows how to work with examples, and the solution suggested there breaks that.
Thank you!
You can use CSV and other external data sources with QAF using a different BDD syntax.
If you want to use Cucumber steps or the Cucumber runner, you can use QAF-cucumber with BDD2 (preferred) or Gherkin syntax. QAF-cucumber enables external test data and other QAF features with Cucumber.
Below is an example feature file that uses BDD2 syntax and can be run with the TestNG or Cucumber runner.
Feature: feature uses external data file
#datafile:resources/${env}/testdata.csv
#regression
Scenario: Another scenario exploring different combination using data-provider
Given a "${precondition}"
When an event occurs
Then the outcome should "${be-captured}"
The testdata.csv file may look like this:
TestcaseId,precondition,be-captured
123461,abc,be captured
123462,xyz,not be captured
You can run it using the TestNG or Cucumber runner, and use any of the built-in data providers or a custom one.

Trace Flash Builder compiling commands

Is there a way to trace the compiler command for Flash Builder? I mean, I want to know the parameters and files it is compiling internally when I click "build" in FB.
Basically I moved a project to Flash Builder and everything works fine, but I have some runtime issues, and it looks like the compiler is doing something wrong with some files (like using old files instead of the ones I'm changing; this occurs only for one particular file, and the rest works fine, or at least I think it does). The way some files are embedded is also different, which is another reason to check what it's doing internally.
I ran the game with mxmlc before, so I can probably compare the difference if I get the command FB executes.
Also, I want to know how to do this in case I need to research something in the future.
Thanks for any help,
Regards
Flash Builder only recompiles if there has been a change to the code. So if you are changing an asset (image), for example, you won't recompile unless you also make a change to the project.
There are a few ways around this:
The easiest way is to just go into a file and press the space bar at the end of a line. It adds an extra byte to your file, but not to the project (the compiler is "smart" and gets rid of unused files, classes, and characters). Since this is not a common thing, it shouldn't be an issue.
Project->Clean.... That will force your workspace to rebuild and, in most cases, will also recompile your project
If #2 is failing, first delete bin-debug or whatever you are using as your debug folder, then run Project->Clean...
It's a tad bit annoying (especially when editing external libraries), but it allows for quicker re-launches of the debugger, which is the ultimate goal of that behavior.

expect script running out of sync?

I'm currently modifying a script used to back up Cisco ACE modules' contexts and crypto files. It works absolutely beautifully with one device. However, when I use it on another module, it seems to go completely out of sync and messes up the script.
From what I can see, the difference is the presence of a line that the ACE module throws up, like so: Warning: Permanently added '[x.x.x.x]' (RSA) to the list of known hosts.\r\r\n This just seems to throw the rest of the script off, even though none of my expect statements are even looking for it!
I've had nothing but nightmares with expect and the way it interprets information from ACE modules; can anyone shed some light on this issue or offer advice on how to make these devices behave when I script for them?
If you're handling one connection at a time, you should make sure you fully terminate one before opening the next. The simplest way of doing that is to put:
close
wait
at the end of the (foreach) loop over the things to connect to.
If you were doing multiple connections at once, you'd have to take care to use the -i option to various commands (notably expect, send and close) and make everything work right in addition to fixing the things I mentioned earlier. It can be done, but it's considerably more tricky and not worth it if you don't need the parallelism.
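For illustration, here is the same one-connection-at-a-time discipline sketched with Python's pexpect library rather than Tcl expect (a swapped-in analogue; the host list, credentials, and prompt patterns are all hypothetical):

# Sketch: fully terminate each connection before opening the next.
import pexpect

for host in ["10.0.0.1", "10.0.0.2"]:          # assumed device list
    child = pexpect.spawn(f"ssh admin@{host}")
    child.expect("assword:")                   # matches "Password:" or "password:"
    child.sendline("secret")                   # placeholder credential
    child.expect(r"#\s*$")                     # assumed device prompt
    child.sendline("exit")
    child.expect(pexpect.EOF)                  # drain output until the session ends
    child.close()                              # close and reap, like close + wait in expect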

What do you think of developing for the command line first?

What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods?
eg.
W:\> todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00"
loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database.
Eventually you add in a screen for this:
============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================
When you click submit, it calls the same AddTask function.
Is this considered:
a good way to code
just for the newbies
horrendous!
Addendum:
I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves?
Why not just call the same executable in different ways:
"todo /G" when you want the full-on graphical interface
"todo /I" for an interactive prompt within todo.exe (scripting, etc)
plain old "todo <function>" when you just want to do one thing and be done with it.
Addendum 2:
It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something."
Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library.
Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience.
I would go with building a library, with a command-line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command-line program from a GUI spawns an external process for each command and is more disruptive to the OS.
Also, with a library you can easily do unit tests for the functionality.
But as long as your functional code is separate from your command-line interpreter, you can just reuse the source for a GUI without needing both kinds of interface at once to perform an operation.
Put the shared functionality in a library, then write a command-line and a GUI front-end for it. That way your layer transition isn't tied to the command-line.
(Also, this way adds another security concern: shouldn't the GUI first have to make sure it's the RIGHT todo.exe that is being called?)
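A minimal sketch of that shared-library split, in Python with entirely hypothetical names:

# todo_core.py -- hypothetical shared library used by both front-ends
import sqlite3

_db = sqlite3.connect("todo.db")
_db.execute("CREATE TABLE IF NOT EXISTS tasks (event TEXT, location TEXT, date TEXT, time TEXT)")

def add_task(event, location, date, time):
    """Validate the input and store the task; both the CLI and the GUI call this."""
    if not event:
        raise ValueError("event is required")
    with _db:
        _db.execute("INSERT INTO tasks VALUES (?, ?, ?, ?)", (event, location, date, time))

# cli.py -- thin command-line front-end over the same function
import argparse
import todo_core

parser = argparse.ArgumentParser(prog="todo")
sub = parser.add_subparsers(dest="command", required=True)
add = sub.add_parser("AddTask")
for name in ("event", "location", "date", "time"):
    add.add_argument(name)
args = parser.parse_args()
if args.command == "AddTask":
    todo_core.add_task(args.event, args.location, args.date, args.time)

A GUI front-end would simply import todo_core and call add_task from its Submit handler - no process spawning involved.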
Joel wrote an article contrasting this ("unix-style") development to the GUI first ("Windows-style") method a few years back. He called it Biculturalism.
I think on Windows it will become normal (if it hasn't already) to wrap your logic into .NET assemblies, which you can then access from both a GUI and a PowerShell provider. That way you get the best of both worlds.
My technique for programming backend functionality first, without needing an explicit UI (especially when the UI isn't my job yet, e.g., I'm designing a web application that is still in the design phase), is to write unit tests.
That way I don't even need to write a console application to mock the output of my backend code - it's all in the tests, and unlike your console app, I don't have to throw the test code away, because it's still useful later.
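For example, a test along these lines drives the backend with no UI at all (reusing the hypothetical todo_core library sketched above):

# test_todo_core.py -- unit tests exercising the backend directly
import unittest
import todo_core

class AddTaskTests(unittest.TestCase):
    def test_rejects_empty_event(self):
        with self.assertRaises(ValueError):
            todo_core.add_task("", "John's office", "2008-08-22", "14:00")

if __name__ == "__main__":
    unittest.main()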
I think it depends on what type of application you are developing. Designing for the command line puts you on the fast track to what Alan Cooper refers to as the "implementation model" in The Inmates Are Running the Asylum. The result is a user interface that is unintuitive and difficult to use.
37signals also advocates designing your user interface first in Getting Real. Remember, for all intents and purposes, in the majority of applications, the user interface is the program. The back end code is just there to support it.
It's probably better to start with a command line first to make sure you have the functionality correct. If your main users can't (or won't) use the command line then you can add a GUI on top of your work.
This will make your app better suited for scripting, as well as limit the amount of upfront bikeshedding, so you can get to the actual solution faster.
If you plan to keep your command-line version of your app then I don't see a problem with doing it this way - it's not time wasted. You'll still end up coding the main functionality of your app for the command-line and so you'll have a large chunk of the work done.
I don't see working this way as a barrier to a nice UI - you've still got the time to add one and make it usable, etc.
I guess this way of working would only really work if you intend for your finished app to have both command-line and GUI variants. It's easy enough to mock a UI and build your functionality into that and then beautify the UI later.
Agree with Stu: your base functionality should be in a library that is called from the command-line and GUI code. Calling the executable from the UI is unnecessary overhead at runtime.
#jcarrascal
I don't see why this has to make the GUI "bad"?
My thought would be that it would force you to think about what the "business" logic actually needs to accomplish, without worrying too much about things being pretty. Once you know what it should/can do, you can build your interface around that in whatever way makes the most sense.
Side note: Not to start a separate topic, but what is the preferred way to address answers to/comments on your questions? I considered both this, and editing the question itself.
I did exactly this on one tool I wrote, and it worked great. The end result is a scriptable tool that can also be used via a GUI.
I do agree with the sentiment that you should ensure the GUI is easy and intuitive to use, so it might be wise to even develop both at the same time... a little command line feature followed by a GUI wrapper to ensure you are doing things intuitively.
If you are true to implementing both equally, the result is an app that can be used in an automated manner, which I think is very powerful for power users.
I usually start with a class library and a separate, really crappy and basic GUI. Since the command line involves parsing the command line, I feel like I'm adding a lot of unnecessary overhead.
As a bonus, this gives an MVC-like approach, as all the "real" code is in a class library. Of course, at a later stage, refactoring the library together with a real GUI into one EXE is also an option.
If you do your development right, then it should be relatively easy to switch to a GUI later on in the project. The problem is that it's kinda difficult to get it right.
Kinda depends on your goal for the program, but yeah, I do this from time to time - it's quicker to code, easier to debug, and easier to write quick and dirty test cases for. And as long as I structure my code properly, I can go back and tack on a GUI later without too much work.
To those suggesting that this technique will result in horrible, unusable UIs: You're right. Writing a command-line utility is a terrible way to design a GUI. Take note, everyone out there thinking of writing a UI that isn't a CLUI - don't prototype it as a CLUI.
But, if you're writing new code that does not itself depend on a UI, then go for it.
A better approach might be to develop the logic as a library with a well-defined API and, at the dev stage, no interface (or a hard-coded interface); then you can write the CLI or GUI later.
I would not do this for a couple of reasons.
Design:
A GUI and a CLI are two different interfaces used to access an underlying implementation. They are generally used for different purposes (GUI is for a live user, CLI is usually accessed by scripting) and can often have different requirements. Coupling the two together is not a wise choice and is bound to cause you trouble down the road.
Performance:
The way you've described things, you need to spawn an executable every time the GUI needs to do something. This is just plain ugly.
The right way to do this is to put the implementation in a library that's called by both the CLI and the GUI.
John Gruber had a good post about the concept of adding a GUI to a program not designed for one: Ronco Spray-On Usability
Summary: It doesn't work. If usability isn't designed into an application from the beginning, adding it later is more work than anyone is willing to do.
#Maudite
The command-line app will check params up front and the GUI won't - but they'll still be checking the same params and inputting them into some generic worker functions.
Still the same goal. I don't see the command-line version affecting the quality of the GUI one.
Write a program that you expose as a web service, then have the GUI and the command line call the same web service. This approach also allows you to build a web GUI, to provide the functionality as SaaS to extranet partners, and/or to better secure the business logic.
This also allows your program to more easily participate in an SOA environment.
For the web service, don't go overboard; use YAML or XML-RPC. Keep it simple.
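A minimal sketch of the XML-RPC route, using only Python's standard library and the hypothetical todo_core names from above:

# server.py -- expose the same core function over XML-RPC
from xmlrpc.server import SimpleXMLRPCServer
import todo_core

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)  # add_task returns None
server.register_function(todo_core.add_task, "AddTask")
server.serve_forever()

# Any front-end (CLI, desktop GUI, or web GUI) then calls it the same way:
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
proxy.AddTask("meeting with John, re: login peer review", "John's office", "2008-08-22", "14:00")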
In addition to what Stu said, having a shared library will allow you to use it from web applications as well. Or even from an IDE plugin.
There are several reasons why doing it this way is not a good idea. A lot of them have been mentioned, so I'll just stick with one specific point.
Command-line tools are usually not interactive at all, while GUIs are. This is a fundamental difference, and it is painful for long-running tasks, for example.
Your command-line tool will at best print out some kind of progress information - newlines, a textual progress bar, a bunch of output, ... Any kind of error it can only output to the console.
Now you want to slap a GUI on top of that; what do you do? Parse the output of your long-running command-line tool? Scan for WARNING and ERROR in that output to throw up a dialog box?
At best, most UIs built this way throw up a pulsating busy bar for as long as the command runs, then show you a success or failure dialog when the command exits. Sadly, this is how a lot of UNIX GUI programs are thrown together, making for a terrible user experience.
Most repliers here are correct in saying that you should probably abstract the actual functionality of your program into a library, then write a command-line interface and the GUI at the same time for it. All your business logic should be in your library, and either UI (yes, a command line is a UI) should only do whatever is necessary to interface between your business logic and your UI.
A command line is too poor a UI to ensure you develop your library well enough for later GUI use. You should start with both from the get-go, or start with the GUI programming. It's easy to add a command-line interface to a library developed for a GUI, but a lot harder the other way around, precisely because of all the interactive features the GUI will need (reporting, progress, error dialogs, i18n, ...).
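As a sketch of how a library-first design avoids the output-parsing problem (hypothetical names, not from the original answers): the core reports structured progress through a callback, and each UI renders it in its own way.

# Sketch: the library reports progress via a callback instead of text output.
import time

def long_task(progress=lambda fraction: None):
    """Core logic; knows nothing about consoles or widgets."""
    for i in range(10):
        time.sleep(0.1)          # stand-in for real work
        progress((i + 1) / 10)   # structured progress, nothing to parse

# The CLI front-end renders the callback as a textual progress bar;
# a GUI front-end would update a progress widget from the same hook.
long_task(lambda f: print(f"\r[{'#' * int(f * 20):<20}] {f:4.0%}", end="", flush=True))
print()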
Command-line tools generate fewer events than GUI apps and usually check all the params before starting. This will limit your GUI, because for a GUI it could make more sense to ask for the params as your program works, or afterwards.
If you don't care about the GUI, then don't worry about it. If the end result will be a GUI, make the GUI first, then do the command-line version. Or you could work on both at the same time.
--Massive edit--
After spending some time on my current project, I feel as though I have come full circle from my previous answer. I think it is better to do the command line first and then wrap a GUI around it. If you need to, I think you can make a great GUI afterwards. By doing the command line first, you get all of the arguments down first, so there are no surprises (until the requirements change) when you are doing the UI/UX.
That is exactly one of my most important realizations about coding and I wish more people would take such approach.
Just one minor clarification: the GUI should not be a wrapper around the command line. Instead, one should be able to drive the core of the program from either a GUI or a command line - at least at the beginning, and just for basic operations.
When is this a great idea?
When you want to make sure that your domain implementation is independent of the GUI framework. You want to code around the framework, not into the framework.
When is this a bad idea?
When you are sure your framework will never die.