Does anyone know if there is a media formatter out there to support the jsonapi.org spec (application/vnd.api+json)?
If not, has anyone started (or is anyone looking at starting) a project to implement it?
For the record, as of today the answer seems to be no. The best I could find was this post: http://www.emadibrahim.com/2014/04/09/emberjs-and-asp-net-web-api-and-json-serialization/ and it only tackles a tiny part of the problem.
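For anyone starting from scratch, the content-negotiation half is the easy part; here is a minimal sketch (JsonApiMediaTypeFormatter is just a name I made up, and this is nowhere near a full jsonapi.org implementation): subclass Web API's built-in JSON formatter and advertise the vnd.api+json media type. Producing the actual document structure (data, links, included) is the part this leaves open.

using System.Net.Http.Formatting;
using System.Net.Http.Headers;

public class JsonApiMediaTypeFormatter : JsonMediaTypeFormatter
{
    public JsonApiMediaTypeFormatter()
    {
        // Accept and emit application/vnd.api+json instead of plain JSON.
        SupportedMediaTypes.Clear();
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/vnd.api+json"));
    }
}

// Registered ahead of the default formatters, e.g. in WebApiConfig.Register:
// config.Formatters.Insert(0, new JsonApiMediaTypeFormatter());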
I've been trying this for a while. Unfortunately, I tried to make something really smart that would automagically handle a data model from Entity Framework with almost no work. I was getting close to thinking about releasing it, and then I found out they changed a bunch of things in EF 6 (all models are supposed to be POCOs, and the context is now a DbContext instead of an ObjectContext), so I'm probably going to have to essentially start over, which is why I started looking again to see if someone else was doing it and found your question.
Let me know if you're still looking for this, and I'll keep you updated.
UPDATE
I've just published a CodePlex project that aims to provide exactly what I described above. It's called JSONAPI.NET, and while it's very early, it already does quite a bit. Documentation is sparse and I don't have NuGet packages yet, but take a look and see if it's useful for you. Feedback is encouraged! You can contact me from the project page in the link.
Is it possible to run JUnit tests in IntelliJ in parallel? If so, how do I do this?
I set the "fork" parameter to class level and it didn't do anything; actually, it made everything a bit slower, so I'm unsure what "fork" does that is beneficial.
Is it possible to do this just using IntelliJ, or do I need some fancy test framework and all the hoo-hah that that would involve?
Finally, assuming this is at all possible, can one control the number of forks or threads or whatever they want to call it?
UPDATE: somebody has linked to a question that might answer this. I looked at that question prior to posting, and I'm unsure what it really "answers". It simply says there is an issue tracker and that this issue has been implemented in IntelliJ; I don't see how to actually use it anywhere.
UPDATE: What does "didn't do anything" mean? It just makes things slower, which isn't very useful. I mean, maybe your tests run blazingly quickly and you want to slow them down to appreciate some Bach? That is cool. I just want mine to run faster; I'm fed up with Bach.
You can make use of junit-toolbox. This is an extension library for JUnit that is listed on the JUnit site itself.
This extension offers the ParallelSuite. With it you can create, with almost no effort, an AllTests class that executes the tests in parallel. A minimal AllTests could look like the code below, using the wildcard pattern feature introduced with junit-toolbox.
@RunWith(ParallelSuite.class)
@SuiteClasses("**/*Test.class")
public class AllTests {}
This will create as many threads for parallel execution as your JVM reports via availableProcessors. To override this you may set the system property maxParallelTestThreads (for example by adding -DmaxParallelTestThreads=4 to the VM options of your run configuration).
I am following an example by José F. Romaniello on session management with NHibernate. It's a very good article; however, I'm struggling with it, having very little experience with NHibernate, Windsor and MVC.
I am trying to re-create NHibernateInstaller, but I am encountering the following error: Component Castle.TypedFactory.DefaultInterfaceFactoryComponentSelector could not be resolved. Make sure you didn't misspell the name, and that component is registered.
In the sample project provided this error does not crop up, even though the installer is identical, and Google does not come up with any results (which is very unusual). What causes this and how can it be avoided?
It seems to be a problem with the TypedFactoryFacility. Are you doing this:
kernel.AddFacility<TypedFactoryFacility>();
before running all the installers?
Uncomment the following code in the Bootstrapper.cs file:
container.AddFacility<TypedFactoryFacility>();
This happened to me when I created my own implementation of ITypedFactoryComponentSelector, but forgot to register the selector itself.
There was no indication that this was the actual problem (the kernel debug information assured me the components could be resolved), but registering the selector fixed the issue.
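For anyone hitting the same thing, a sketch of the fix (MySelector stands in for whatever ITypedFactoryComponentSelector implementation you wrote; the facility alone is not enough, the selector must be registered as a component too):

using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// e.g. inside your bootstrapper:
var container = new WindsorContainer();
container.AddFacility<TypedFactoryFacility>();

// The custom selector is a component like any other; without this line
// Windsor cannot resolve it when a typed factory asks for it.
container.Register(
    Component.For<ITypedFactoryComponentSelector>()
             .ImplementedBy<MySelector>());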
Hope this helps someone :-)
I want to test validation logic in a legacy class. The class uses a method to load effective dates from a config file.
I have written a subclass of the class in question and overridden the config method so I can run my unit test against the subclass with any combination of effective dates.
Is this an appropriate strategy? It strikes me as a clean technique for testing code that you don't want to mess with.
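For concreteness, here is a minimal sketch of the technique in C# (all names invented): the production class exposes the config lookup as a protected virtual seam, and the test subclass overrides just that seam.

using System;
using System.Configuration;

public class LegacyValidator
{
    public bool IsEffective(DateTime date) =>
        date >= EffectiveStart() && date < EffectiveEnd();

    // In the real class these read the config file; they are virtual so a
    // test subclass can substitute arbitrary dates.
    protected virtual DateTime EffectiveStart() =>
        DateTime.Parse(ConfigurationManager.AppSettings["effectiveStart"]);
    protected virtual DateTime EffectiveEnd() =>
        DateTime.Parse(ConfigurationManager.AppSettings["effectiveEnd"]);
}

// The test double: same validation logic, config seam overridden.
public class TestableValidator : LegacyValidator
{
    private readonly DateTime _start, _end;
    public TestableValidator(DateTime start, DateTime end) { _start = start; _end = end; }
    protected override DateTime EffectiveStart() => _start;
    protected override DateTime EffectiveEnd() => _end;
}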
I like it; it's the simplest and most straightforward way to get this done. And since it is a legacy class, it will not change anymore, so you don't run the risk of bumping into the fragile base class problem either.
It seems to be an appropriate strategy to me. Of course, with this override you won't be able to test the code (in the original class) that loads the config data, but if you have other tests to cover that scenario then I think the approach you outlined is fine.
Just as the title says: what ways do you use to test your own code so that it isn't a boring task? Do you use any tools? For my projects, I use a spreadsheet to list all the possible routines, i.e. the basic CRUD plus all the weird routines. I end up with about 10 routines.
By doing this I find about 2-3 bugs, sometimes major ones. And if I don't do it, the client reports another bug.
So tell me: what techniques do you use to test your own code in a way that doesn't bore you?
Edit:
I forgot to mention that I am working mostly on web-based apps, and my language is PHP with the CakePHP framework.
Have fast tests. The (more) immediate feedback helps to achieve short iterations. This can almost make you addicted to starting the next test run.
If you find testing boring, it's because you see testing your code as a necessary evil, or at least that's how I read your question.
All you need here is a change in your point of view towards testing, and more specifically a change in HOW you are testing. You love programming a lot more than testing? Then program your tests: it is just as fun as programming the thing to begin with, and when you are done you have
a program that works
a test suite that remains and tests it on every build
So leave that Excel sheet and step-by-step debugger behind and join the fun :-)
Of course there is more to it, and this is where test frameworks (JUnit, TestNG, DUnit, NUnit, ...) come in handy: they take the little pains away and leave only the coding part of the test.
Happy coding, and by extension, happy testing :-)
A few references you may find useful. I am not a PHP expert, far from it, but these seemed to fit the purpose.
http://www.simpletest.org/
http://www.phpunit.de/
http://laughingmeme.org/2003/08/05/a-few-tips-for-writing-useful-libraries-in-php/
I used to think the same as you. When I first started programming, we had to work out what the output would be on paper and then do visual comparisons of the actual and expected output. Talk about tedious. A couple of years ago, I discovered Test Driven Development and xUnit and now I love tests.
Basically, in TDD, you have a framework designed to allow you to write tests and run them very easily. So, writing tests just becomes writing code. The process is:
Just write enough to allow you to write a test. E.g. you're adding a method to a class, so you just write the method signature and any return statement needed to get it to compile.
Then you write your first test and run the framework to see that it fails.
Then you add code to/refactor your method to get the test to pass.
Then you add the next test and see that it fails.
Repeat 3 and 4 until you can't think of any more tests.
You've finished.
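For illustration, here is one pass through steps 1-3 using NUnit and invented names (any xUnit framework looks much the same):

using NUnit.Framework;

public class PriceCalculator
{
    // Step 1: this started life as `return 0m;`, just enough to compile.
    // Step 3: replaced with the real rule once the test below had failed.
    public decimal Discounted(decimal price) => price * 0.9m;
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test] // Step 2: run this and watch it fail before writing the logic.
    public void Applies_ten_percent_discount()
    {
        Assert.AreEqual(90m, new PriceCalculator().Discounted(100m));
    }
}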
That's one of the nice things about TDD: once your code passes every test you can think of, you know you're finished; without TDD, it is sometimes difficult to know when to stop. Where do your tests come from? They come from the spec. TDD often helps you to realise that the spec is full of holes, as you think of test cases for things that weren't in it. You can get these questions answered before you start writing the code to deal with them.
Another nice thing is that when you discover a bug later, you can start reworking your code safe in the knowledge that all of the existing tests will prove your code still works for all the known cases, whilst the new tests you've written to recreate the bug will show you when you've fixed it.
You can add unit tests to existing code - just add them for the bits you're changing. As you keep coming back to it, the tests will get more and more coverage.
xUnit is the generic name for a bunch of frameworks that support different languages: JUnit for Java, NUnit for .NET, etc. There's probably already one for whatever language you use. You can even write your own framework. Read this book - it's excellent.
For new code, work out what the code should do, write a test that asserts that the code does it, work out how to do it, then write the code.
For finding bugs in existing code, write a test which reproduces the bug; it makes the bug easier to pin down, and it tells you exactly when the bug is fixed.
This isn't boring, because in both cases the tests have a high likelihood of failure.
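For example (invented scenario, NUnit for concreteness): suppose a user reports that a quantity of "0" is rejected. The first step is a test that fails for exactly that input; once the code is fixed, the test stays as a regression guard.

using NUnit.Framework;

public static class QuantityParser
{
    // Before the fix this rejected "0" (the reported bug); the test below
    // failed, the code was corrected, and the test now guards the fix.
    public static int Parse(string text) => int.Parse(text.Trim());
}

[TestFixture]
public class QuantityParserRegressionTests
{
    [Test]
    public void Zero_is_a_valid_quantity()
    {
        Assert.AreEqual(0, QuantityParser.Parse("0"));
    }
}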
For UAT, I haven't found any non-boring way: you go through the requirements one by one and write as many tests as are required for the functionality. Ideally, for new projects, that would have been done mostly up-front as part of the elaboration, but not always. It's only when you're writing tests after the fact, and you have a long list of tests which you already know will pass, that it gets boring.
I don't see how it can be boring, since it's a large part of programming itself. Finding and removing bugs is very important, but if you think it's boring, maybe you would rather write code, in which case you can write a few lines that test the critical parts of your code.
Use a test-first approach / pair programming with test-first.
If you write them after you have written your own code, then your target is to find mistakes in your work = sad target.
Conversely, if you write your tests before you code, then your target is to write flawless software = happy target.
You probably mean tedious, rather than boring.
If so, this article may help
"No testing, no boring."
Write automatic unit tests, with PHPUnit or SimpleTest since you're using PHP, or any other unit-testing framework available for your language of choice. Following Test-Driven Development (TDD), you will build a test suite along with your code. You won't have the impression you're testing anything. Really.
"Test a little, code a little."
One piece of advice I give my team is that, for a new feature, 90% of the logic should be able to run outside the context of the application.
Features that can run outside of the application context are always easy to test.
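In practice that means keeping the rule in a plain class with no web, session or database dependencies, so a test can call it directly without booting the application. A sketch in C# (names invented):

// The rule knows nothing about HTTP requests or the ORM; a controller or
// page only adapts input and calls it, so the untestable glue stays thin.
public static class ShippingRules
{
    public static decimal FeeFor(decimal orderTotal) =>
        orderTotal >= 50m ? 0m : 4.99m;
}

// In a test: ShippingRules.FeeFor(49.99m) should be 4.99m; no application
// context is needed to verify that.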
If you are using .NET, you can investigate NUnit.
You can also look at Pex. It seems to be an amazing test framework.
However, your question is a little generic, because there are a lot of testing types.
Have fun testing :).
I try to write my tests first and design the class around them, so I am really test-focused. I am using JUnit etc.
If you try programming that way, testing becomes more and more fun, from my point of view.
I work for a small company, yet we have a separate test team. This is because developers are often blind to their own errors, so they tend to be bad testers.
Our test team is made up of experienced Test Engineers who work according to predefined test-plans and who often use automated test-tools to test the applications we create. (Including websites!) They are not developers!
These testers use TMap for the automated testing. The rest is just manual labor, reading the functional designs and making sure that whatever is mentioned in the functional design will work exactly as described in the final version.
Any errors are reported back to the developers by using an internal bug reporting tool.
Write some unit tests/automated tests, which will run automatically e.g. after a new build has been done.
Use encapsulation and try to test against interfaces only.
Write some small tools to help you test your modules/classes.
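To illustrate the "test against interfaces only" point, a sketch in C# (names invented): code that depends on an IClock instead of DateTime.Now can have time pinned down by a fake in tests.

using System;

public interface IClock { DateTime Now { get; } }

public class Greeter
{
    private readonly IClock _clock;
    public Greeter(IClock clock) { _clock = clock; }
    public string Greeting() => _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}

// A trivial fake for tests; production code would pass a real clock.
public class FixedClock : IClock
{
    public DateTime Now { get; }
    public FixedClock(DateTime now) { Now = now; }
}

// In a test: new Greeter(new FixedClock(new DateTime(2010, 1, 1, 9, 0, 0)))
//            .Greeting() should return "Good morning".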
Making an easy-to-use test suite is easy for Perl programs. There is a standard way to do testing in Perl, using the Test Anything Protocol (TAP).
Basically you write a bunch of files with the .t extension, in the t/ directory of your project, and then run prove.
The files in t/ basically look like this:
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 8;
use Date::ICal;
my $ical = Date::ICal->new( year => 1964, month => 10, day => 16,
                            hour => 16, min => 12, sec => 47,
                            tz => '0530' );
ok( defined $ical, 'new() returned something' );
ok( $ical->isa('Date::ICal'), " and it's the right class" );
is( $ical->sec, 47, ' sec()' );
is( $ical->min, 12, ' min()' );
is( $ical->hour, 16, ' hour()' );
is( $ical->day, 17, ' day()' );
is( $ical->month, 10, ' month()' );
is( $ical->year, 1964, ' year()' );
For more information you can read the tutorial.
There are many languages with modules designed to work with TAP; have a look here for more information.
Unfortunately, TAP has only recently started being used for languages other than Perl, so there isn't as much support for them as there is for Perl.
Do not write tests for trivial stuff, at least not until it breaks, which is a rare occasion. If you do, you will feel discomfort every time you have to come back and maintain those tests. It's absolutely normal: boredom, laziness, frustration, etc. are your natural instinctive reactions to pointless work.
Quite the opposite: writing tests for non-trivial algorithms and logic, and discovering corner cases which you didn't even think about, is actually fun and a very rewarding experience.