Are there any open-source projects for building reusable GUI components in Qt4 with extra connection information saved in a JSON file? Qt Designer lets you build a dialog, connect signals and slots together, and save the result as a .ui file. I am looking for a project that extends this to build components you could easily plug into a C++ or PySide application. An example would be a play control for a movie player with start, stop, rewind, and fast-forward buttons. In the application you would then just load the .ui file, or perhaps a JSON file with extra inputs and outputs for callbacks.
leeg, what you are talking about doing with JSON is fairly extensive. I have to agree with Stefan's comment that modeling your custom widgets as subclasses of QWidget is the correct approach.
You can easily drop these classes into any Qt GUI application. I use this technique frequently when creating GUI elements not already present in the standard Qt widget library. You can set layouts (vertical and horizontal) and nest widgets inside widgets to design complex composite widgets. From there, you can create custom signals and slots for handling different events.
For example, you mentioned a player control for a movie player with start, stop, rewind, and fast-forward buttons. You would create a containing widget, add a horizontal layout to it, and then add the four buttons. If you want images on your buttons, you can set them on the buttons themselves. The containing widget then emits start, stop, rewind, and fastForward signals, and the calling object connects a slot to each.
This skeleton should help you get started:
#include <QtCore>
#include <QtGui>

class PlayerControl : public QWidget
{
    Q_OBJECT

signals:
    void start();
    void stop();
    void rewind();
    void fastForward();

private slots:
    void startClicked();
    void stopClicked();
    void rewindClicked();
    void fastForwardClicked();

public:
    explicit PlayerControl(QWidget *parent = 0);
    ~PlayerControl();

private:
    QPushButton startButton;
    QPushButton stopButton;
    QPushButton rewindButton;
    QPushButton fastForwardButton;
};

PlayerControl::PlayerControl(QWidget *parent)
    : QWidget(parent)
{
    // lay the buttons out horizontally inside this widget
    QHBoxLayout *layout = new QHBoxLayout(this);
    layout->addWidget(&rewindButton);
    layout->addWidget(&startButton);
    layout->addWidget(&stopButton);
    layout->addWidget(&fastForwardButton);

    // connect() takes QObject pointers, so pass the addresses of the members
    connect(&startButton, SIGNAL(clicked()), this, SLOT(startClicked()));
    connect(&stopButton, SIGNAL(clicked()), this, SLOT(stopClicked()));
    connect(&rewindButton, SIGNAL(clicked()), this, SLOT(rewindClicked()));
    connect(&fastForwardButton, SIGNAL(clicked()), this, SLOT(fastForwardClicked()));
}

PlayerControl::~PlayerControl()
{
    // clean up (child widgets owned by this widget are deleted automatically)
}

void PlayerControl::startClicked()
{
    emit start();
}

void PlayerControl::stopClicked()
{
    emit stop();
}

void PlayerControl::rewindClicked()
{
    emit rewind();
}

void PlayerControl::fastForwardClicked()
{
    emit fastForward();
}
Could someone please help me understand how to properly dispose of a Box2D World and debug renderer?
I have a play screen that has a world and renderer, and I would like to dispose of these when I change to another screen, as I no longer need them. I have included the following in my play screen's dispose(), and call it manually when an event triggers a screen change. At the moment, calling these dispose() methods crashes my game. Must a game have a Box2D world and renderer at all times? What does 'EXCEPTION_ACCESS_VIOLATION' mean?
@Override
public void dispose() {
    System.out.println("PlayScreen disposed.");
    world.dispose();
    b2dr.dispose();
    ...
}
In my experience there are (at least) two situations where you can get an EXCEPTION_ACCESS_VIOLATION in libGDX Box2D:
world.dispose() is called during world.step()
world.dispose() is called from a different thread
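A common fix for the first case is to defer disposal with a flag that is checked after the step completes, instead of disposing mid-step. The following is a minimal, framework-free sketch of that pattern; the World and PlayScreen classes here are stand-ins I made up to show the control flow, not the real libGDX types:

```java
// Stand-in for com.badlogic.gdx.physics.box2d.World, just to illustrate the pattern.
class World {
    boolean disposed = false;
    void step(float dt) { /* physics update happens here */ }
    void dispose() { disposed = true; }
}

class PlayScreen {
    final World world = new World();
    boolean disposePending = false; // set by the screen-change event

    // Called from the render thread only.
    void render(float dt) {
        world.step(dt);
        // Dispose AFTER the step has finished, on the same thread that steps.
        if (disposePending) {
            world.dispose();
            disposePending = false;
        }
    }

    // Safe to call from an event handler even mid-step: it only sets a flag.
    void requestDispose() {
        disposePending = true;
    }
}
```

The event handler never touches the world directly; the render loop performs the actual disposal once the current step is done, which avoids both crash situations above.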
Say I have a method that loads all my assets for a Screen; this method is called in that Screen's constructor:
public void load(){
    manager.load(pd_bg, Texture.class, textureParams);
}
Now when I exit that Screen, I have a method that unloads all these assets:
public void unLoad(){
    manager.unload(pd_bg);
}
Inside my Screen I might use this asset for a Sprite, like so:
Sprite bg = new Sprite(GdxAssetManager.manager.get(GdxAssetManager.pd_bg, Texture.class));
Finally, do I need to dispose of the texture used in this sprite even though I call the unLoad() method? I.e.:
public void dispose(){
    GdxAssetManager.unLoad();
    bg.getTexture().dispose(); // Is this line needed?
}
I am also wondering: if I load all resources when I start the app, should I then unload resources when I exit a Screen? How would they be loaded next time then (since I only load them on launch)?
I am using a Sprite as an example, but I guess the answer will be true for any asset.
No, you don't have to (and must not) dispose of it when you unload it. In general, the rule of thumb is: if you create it, you destroy it. So if you create a Texture (using the new keyword) then you own that texture and are responsible for destroying it (calling its dispose() method). In the case of AssetManager, it is the AssetManager that owns the resource and is responsible for destroying it.
To keep track of which resources need to be created and destroyed, AssetManager uses reference counting. So it is important that you eventually call unload once for every time you call load.
And because you created the AssetManager using the new keyword, you own it and you are responsible for calling its dispose method:
public class MyGame extends ApplicationAdapter {
    public AssetManager assetManager;

    @Override public void create() {
        assetManager = new AssetManager();
        ...
    }
    ...
    @Override public void dispose() {
        assetManager.dispose();
        assetManager = null;
        ...
    }
}
Have a look at the documentation: https://github.com/libgdx/libgdx/wiki/Managing-your-assets
By the way, in your code it looks like you are using a static AssetManager. Although not related to your question, be aware that this will lead to issues. So I'd advise you to use a proper object-oriented design instead of making things static.
As for your second question, it is unclear what you mean. If you mean when you should call AssetManager#unload, then the answer is whenever the class that called the corresponding AssetManager#load method no longer needs it.
For example, if you have an asset named "image.png" and you call assetManager.load("image.png", Texture.class) in both your MyGame and MyScreen classes, then you should call assetManager.unload("image.png") in both your MyGame and MyScreen classes as well.
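The reference counting described above can be sketched like this. This is a toy model I wrote to illustrate the idea, not the real AssetManager code: the asset is only actually destroyed when its count drops back to zero.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of AssetManager-style reference counting.
class RefCountingManager {
    private final Map<String, Integer> refCounts = new HashMap<String, Integer>();

    void load(String name) {
        Integer count = refCounts.get(name);
        refCounts.put(name, count == null ? 1 : count + 1);
        // a real manager would read the file only on the 0 -> 1 transition
    }

    void unload(String name) {
        Integer count = refCounts.get(name);
        if (count == null) throw new IllegalStateException("not loaded: " + name);
        if (count == 1) {
            refCounts.remove(name); // last reference gone: dispose the asset here
        } else {
            refCounts.put(name, count - 1);
        }
    }

    boolean isLoaded(String name) {
        return refCounts.containsKey(name);
    }
}
```

So if both MyGame and MyScreen load "image.png", the texture survives the first unload and is only destroyed after the second one.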
To answer your last two questions:
In your launcher class, you should load every texture you need across all screens.
Then in your screens you access the images you need; you don't unload them, and you also don't dispose of them or of the AssetManager.
Then in your launcher class's
dispose(){}
method, you first unload everything and then call
assetManager.dispose();
I am now moving from code-behind XAML dependency-property binding to using CreateBindingSet, as I believe it will be easier to maintain in the long run. Previously, to confirm that I hadn't missed any binding, I had a Windows Phone test project with a generic test routine that would parse a view for all its controls and confirm that each had a correct binding. I did this using
element.GetBindingExpression(dependencyProperty) // from System.Windows
and that worked beautifully, validating all my views.
But now, as I change over, all these tests are failing. Does anyone have suggestions on how I can test the same thing when the binding is applied using CreateBindingSet and .Apply()?
Reasoning behind the Madness
Being a lazy sod, I dream of a day when my View can be shared across all platforms; until then, the following will do (I have most of it in place and working).
A boilerplate class that would be shared between all platforms:
#if __IOS__
... // needed namespaces
#else
... // platform-specific namespaces
#endif
public partial class FirstView
{
    private new FirstViewModel ViewModel
    {
        get { return (FirstViewModel)base.ViewModel; }
    }

    private void CommonBinding()
    {
        var set = this.CreateBindingSet<FirstView, FirstViewModel>();
        // do common bindings
        set.Bind(PageText).For(c => c.Text).To(vm => vm.PageText).OneTime();
        set.Apply();
    }
}
Then the View in Touch would be:
public partial class FirstView : MvxViewController
{
    public override void LoadView()
    {
        // create
    }

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        CommonBinding();
    }
}
In theory, Views on the other platforms would be almost identical, just with different inheritance (MvxActivity with OnCreate and OnViewModelSet; MvxPhonePage with XAML or an alternative, and the Loaded event for binding).
Finally, I want a common way of testing that all the items have a binding set somehow. In my mind, until AutoView is supported on WP8, this is the way to have as much shared code as possible.
I have just started on Droid, and am trying to make the layout compatible with XibFree, which I have already used in my Touch project. If this works, I can then share the layout between Droid and Touch (though perhaps I should be looking at AutoView anyway).
Personally, I'm not sure these tests add that much value to your application - they feel like they are operating at the level of "duplicating code" rather than actually testing any functionality.
However, this is very much down to personal opinion, and if you do want this level of testing, then I think you could do this by:
Inherit a class from https://github.com/MvvmCross/MvvmCross/blob/v3.1/Cirrious/Cirrious.MvvmCross.BindingEx.WindowsPhone/WindowsBinding/MvxWindowsBindingCreator.cs
Override ApplyBinding so that you can capture at test time the calls made for each element
Register this class with IoC as the IMvxBindingCreator in your test harness
Use the captured lists of binding requests in your tests
This is related to a similar question I just asked; however, this one is specifically tailored to my individual project, rather than object-oriented programming in general.
I am working on a version of hangman with some interesting programming twists. I don't need to go into detail of what they are as the logic for the game is already finished. I can run an entire game by hard-coding variables for the user input (such as guess selection). I am now in the process of replacing all those bits that require user interaction with the trappings of an actual game like buttons, images, sounds, etc.
I am trying to figure out whether it is better to have all of this stuff be part of my main class, or whether I should create another class to handle it all. For example, I want my players to be able to click an on-screen keyboard to make their guess, with each button firing a separate event listener call to the makeGuess function. Would it be better to create the buttons as direct children of my main game class, or should I create a subclass (called Keyboard, for example) that builds the keyboard section of the board with the appropriate events, and then add that Keyboard instance as a child of the main class rather than all the individual pieces? What are the pros and cons of each of these choices?
For the record, I'm programming using FlashDevelop, so nothing like a timeline for me.
I'd say you should at least create a Keyboard class that handles the events fired by tapping/clicking the keys inside it, and give it a callback reference to your Main class (or GameLogic class) so that it can call theMain.guess(letter); the Main class logic then takes over and processes the callback. Since this structure is not really part of the game logic, and can technically be reused by defining an interface for the callback (so you can use this keyboard anywhere you want the player to type letters with the mouse), it is better kept separate from the main logic.
public class Keyboard extends Sprite {
    public var callback:AcceptingKeys; // an interface
    ... // class implementation, with all the listeners, children and stuff
    // and in there you call: callback.acceptKey(key);
}

public interface AcceptingKeys {
    function acceptKey(key:String):void; // or whatever type you need
}
And in your Main class:
public class Main extends Sprite implements AcceptingKeys {
    ...
    var keyboard:Keyboard;

    private function init(e:Event = null):void {
        ... // other code. It's FD, so this function exists
        keyboard = new Keyboard();
        keyboard.callback = this;
        // past this point your instances can talk
    }

    public function acceptKey(key:String):void {
        // matches the interface description
        ... // do game logic for parsing a key
    }
}
My Swing application prints lines of text to a JTextPane inside of a JScrollPane when a JButton is pressed. For quick operations there is no issue. However, some JButtons invoke operations that may take a few minutes. The button remains greyed out during this time.
What currently happens is that the text is "batched up" and then I get hundreds of lines all at once at the end of the operation at the same moment the button becomes un-greyed. The problem is that I would like the text being appended to the document displayed in the JTextPane to appear sooner (at the moment it is appended) rather than at the time the entire operation completes. This would create a better user experience.
What am I doing wrong?
Use a SwingWorker for performing your background operation.
// Your button handler
public void actionPerformed(ActionEvent e) {
    (new SwingWorker<Void, String>() {
        @Override
        protected Void doInBackground() {
            // perform your operation here, off the EDT;
            // call publish("your string") as each line is produced
            return null;
        }
        @Override
        protected void process(List<String> chunks) {
            // runs on the EDT: append the published strings to your document here
        }
    }).execute();
}
You are invoking code directly on the AWT thread, which blocks every event. The solution is to put your long-running code in a separate Thread. As your code executes and obtains results, you notify your view (using the observer/observable pattern). As your view is notified, you update the scroll pane content.
You must also verify whether you are running on the AWT thread (SwingUtilities.isEventDispatchThread()). If you are not, you need to dispatch the view update onto the AWT thread using SwingUtilities.invokeLater(), because Swing is not thread-safe.
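A minimal sketch of that check-and-dispatch pattern follows. The StringBuilder stands in for the JTextPane's document so the example stays self-contained; in real code the Runnable body would insert into the document instead:

```java
import javax.swing.SwingUtilities;

class EdtDispatch {
    // Append text to the view, marshalling onto the EDT if we are not already on it.
    static void appendOnEdt(final StringBuilder view, final String line) {
        Runnable update = new Runnable() {
            public void run() {
                view.append(line); // real code: textPane.getDocument().insertString(...)
            }
        };
        if (SwingUtilities.isEventDispatchThread()) {
            update.run(); // already on the EDT: update directly
        } else {
            SwingUtilities.invokeLater(update); // queue the update onto the EDT
        }
    }

    // Small demo: append from a non-EDT thread, then flush the EDT queue.
    static String demo() {
        final StringBuilder view = new StringBuilder();
        appendOnEdt(view, "hello"); // called off the EDT, so it goes through invokeLater
        try {
            // wait until the EDT has processed the queued update
            SwingUtilities.invokeAndWait(new Runnable() { public void run() {} });
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
        return view.toString();
    }
}
```

Updating as each line is produced, instead of at the end of the whole operation, is what makes the text appear incrementally in the JTextPane.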