JUnit doesn't discover classes within a folder with --select-directory

I tried to run JUnit 5 with the standalone console launcher using the following command:
java -jar ex1/lib/junit-platform-console-standalone-1.6.0-M1.jar --select-directory="./ex1/test"
with the following hierarchy:
.
├── ex1
│   ├── lib
│   │   ├── apiguardian-api-1.1.0.jar
│   │   ├── junit-jupiter-api-5.6.0-M1.jar
│   │   ├── junit-platform-commons-1.6.0-M1.jar
│   │   ├── junit-platform-console-standalone-1.6.0-M1.jar
│   │   └── opentest4j-1.2.0.jar
│   ├── mavnat.iml
│   ├── out
│   │   ├── production
│   │   │   └── mavnat
│   │   │       ├── AVLTree$AVLNode.class
│   │   │       ├── AVLTree.class
│   │   │       ├── AVLTree$DeletionBalancer.class
│   │   │       ├── AVLTree$IAVLNode.class
│   │   │       ├── AVLTree$InsertionBalancer.class
│   │   │       ├── AVLTree$Rotations.class
│   │   │       └── TreePrinter.class
│   │   └── test
│   │       └── mavnat
│   │           ├── ActualAVLTree.class
│   │           ├── ActualAVLTree$IAVLNode.class
│   │           ├── AVLSanitizer.class
│   │           ├── AVLTreeTest.class
│   │           ├── AVLTreeTestExternal.class
│   │           ├── DeletionTest.class
│   │           ├── ExTester$10.class
│   │           ├── ExTester$11.class
│   │           ├── ExTester$12.class
│   │           ├── ExTester$13.class
│   │           ├── ExTester$14.class
│   │           ├── ExTester$1.class
│   │           ├── ExTester$2.class
│   │           ├── ExTester$3.class
│   │           ├── ExTester$4.class
│   │           ├── ExTester$5.class
│   │           ├── ExTester$6.class
│   │           ├── ExTester$7.class
│   │           ├── ExTester$8.class
│   │           ├── ExTester$9.class
│   │           ├── ExTester.class
│   │           ├── InsertionTest.class
│   │           ├── META-INF
│   │           │   └── mavnat.kotlin_module
│   │           ├── RotationsTest.class
│   │           ├── SplitTest.class
│   │           ├── SuccessStatus.class
│   │           ├── TesterUtils.class
│   │           ├── Tests.class
│   │           └── TestUtils.class
│   ├── pro-1.docx
│   ├── src
│   │   ├── AVLTree.java
│   │   └── TreePrinter.java
│   └── test
│       ├── AVLSanitizer.java
│       ├── AVLTreeTestExternal.java
│       ├── AVLTreeTest.java
│       ├── DeletionTest.java
│       ├── InsertionTest.java
│       ├── JoinTest.java
│       ├── RotationsTest.java
│       ├── SplitTest.java
│       └── TestUtils.java
└── ex2
Unfortunately, it seems that the JUnit launcher doesn't discover the tests.
╷
├─ JUnit Jupiter ✔
└─ JUnit Vintage ✔
Test run finished after 29 ms
[ 2 containers found ]
[ 0 containers skipped ]
[ 2 containers started ]
[ 0 containers aborted ]
[ 2 containers successful ]
[ 0 containers failed ]
[ 0 tests found ]
[ 0 tests skipped ]
[ 0 tests started ]
[ 0 tests aborted ]
[ 0 tests successful ]
[ 0 tests failed ]
Does anyone know why tests aren't found?
I would expect all the tests to be run and get info about failed tests.
edit: verbose run:
Thanks for using JUnit! Support its development at https://junit.org/sponsoring
Test plan execution started. Number of static tests: 0
╷
├─ JUnit Jupiter
└─ JUnit Jupiter finished after 7 ms.
├─ JUnit Vintage
└─ JUnit Vintage finished after 2 ms.
Test plan execution finished. Number of all tests: 0
Test run finished after 42 ms
[ 2 containers found ]
[ 0 containers skipped ]
[ 2 containers started ]
[ 0 containers aborted ]
[ 2 containers successful ]
[ 0 containers failed ]
[ 0 tests found ]
[ 0 tests skipped ]
[ 0 tests started ]
[ 0 tests aborted ]
[ 0 tests successful ]
[ 0 tests failed ]

JUnit Jupiter (shipped with "JUnit 5") and JUnit Vintage (executes "JUnit 3+4" tests) don't support the --select-directory option offered by the JUnit Platform (part of "JUnit 5"). Both engines only support class-based test selection, which means that test classes must be available on the class or module path at runtime.
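For context, class-based discovery simply means the launcher scans the class path for compiled classes carrying Jupiter annotations. A minimal, purely hypothetical example of such a class (the real AVLTreeTest from the question is not shown here) would be:

import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical sketch: any class with @Test methods that is on the scanned
// class path will be picked up by --scan-class-path.
class ExampleTest {
    @Test
    void discovered() {
        assertTrue(true);
    }
}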
Using paths from your directory tree, this standalone command should work:
java -jar junit-platform-console-standalone-${VERSION}.jar \
     --class-path ex1/out/test \
     --class-path ex1/out/production \
     --scan-class-path
This assumes that you compiled all sources within IntelliJ IDEA.
Here's a hint on how to compile your production and test sources on the command line: How to launch JUnit 5 (Platform) from the command line (without Maven/Gradle)?
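For completeness, here is a minimal command-line sketch of that workflow. It assumes a Unix-style shell (":" as the class-path separator), the jar versions from the lib folder above, and that all test sources sit directly under ex1/test; adjust paths for your actual layout:

# compile production sources
javac -d ex1/out/production ex1/src/*.java

# compile test sources against the production classes and the bundled Jupiter API
javac -d ex1/out/test \
      -cp ex1/out/production:ex1/lib/junit-platform-console-standalone-1.6.0-M1.jar \
      ex1/test/*.java

# run the console launcher, scanning the class path for test classes
java -jar ex1/lib/junit-platform-console-standalone-1.6.0-M1.jar \
     --class-path ex1/out/test \
     --class-path ex1/out/production \
     --scan-class-path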

Related

How to serve multiple HTML pages using Wasm with Rust?

I am trying to build a web application running wasm on the client side, and I am wondering what would be a good way of serving multiple pages.
My main concern is performance, because I would like to split the application up into contextual chunks instead of having one page with all of the content.
I am using Rust with the Yew framework for the client side, and so far I have only used yew_router to handle different routes on the client side, but they all work from the same HTML page and Wasm module. Now I would like to be able to serve different HTML pages with separate Wasm modules, and I am unsure how to realize this. Do I really have to write a dedicated crate for each page and compile those to individual Wasm modules, so I can serve them individually? Or is there some way I can compile one rust crate to multiple Wasm modules?
My current project structure looks like this:
.
├── Cargo.toml // workspace manifest
├── client
│   ├── Cargo.toml
│   ├── favicon.ico
│   ├── index.html
│   ├── main.js
│   ├── Makefile
│   ├── pkg // built using wasm-pack and rollup
│   │   ├── bundle.js
│   │   ├── client_bg.d.ts
│   │   ├── client_bg.wasm
│   │   ├── client.d.ts
│   │   ├── client.js
│   │   ├── package.json
│   │   └── snippets
│   ├── src
│   ├── statics
│   ├── styles
│   └── vendor
├── server
│   ├── Cargo.toml
│   └── src
└── ... other crates
and I run the server inside client/ where it responds with index.html to any incoming GET requests. The index.html links to the bundle.js in pkg/ which sets up the client_bg.wasm module.
Now I basically want to do this with another HTML page, with another Wasm module, preferably from the same Yew App in the client crate.
Thank you
Currently it's impossible to generate two Wasm files from one crate. The solution is to use one crate per Wasm file and then let another build system put each output in the correct place.
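A rough sketch of that layout, using hypothetical crate names (client_home, client_admin, client_shared are not from the original project): a workspace with one small crate per page, each producing its own Wasm module, and the shared Yew components in a library crate that both depend on.

# Cargo.toml (workspace manifest) -- hypothetical members
[workspace]
members = [
    "client_home",    # compiled to its own .wasm, loaded by index.html
    "client_admin",   # compiled to its own .wasm, loaded by admin.html
    "client_shared",  # library crate with shared components/models
    "server",
]

Each page crate would then be built separately (for example, wasm-pack build client_home and wasm-pack build client_admin), and the existing Makefile/rollup step copies each pkg/ output next to the HTML page that loads it.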

How to let Spark 2.0 read multiple parquet folders like CSV

I have some daily data to save into multiple folders (mostly based on time). I have two formats to store the files, parquet and CSV, and I would like to use parquet to save some space.
The folder structure is like the following:
[root@hdp raw]# tree
.
├── entityid=10001
│   └── year=2017
│       └── quarter=1
│           └── month=1
│               ├── day=6
│               │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
│               └── day=7
│                   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
├── entityid=100055
│   └── year=2017
│       └── quarter=1
│           └── month=1
│               ├── day=6
│               │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
│               └── day=7
│                   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
├── entityid=100082
│   └── year=2017
│       └── quarter=1
│           └── month=1
│               ├── day=6
│               │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
│               └── day=7
│                   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
└── entityid=10012
    └── year=2017
        └── quarter=1
            └── month=1
                ├── day=6
                │   └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
                └── day=7
                    └── part-r-00000-84f964ec-f3ea-46fd-9fe6-8b36c2433e8e.snappy.parquet
Now I have a Python list that stores all the folders that need to be read; each run only needs to read some of the folders, based on filter conditions.
folderList=df_inc.collect()
folderString=[]
for x in folderList:
    folderString.append(x.folders)
In [44]: folderString
Out[44]:
[u'/data/raw/entityid=100055/year=2017/quarter=1/month=1/day=7',
u'/data/raw/entityid=10012/year=2017/quarter=1/month=1/day=6',
u'/data/raw/entityid=100082/year=2017/quarter=1/month=1/day=7',
u'/data/raw/entityid=100055/year=2017/quarter=1/month=1/day=6',
u'/data/raw/entityid=100082/year=2017/quarter=1/month=1/day=6',
u'/data/raw/entityid=10012/year=2017/quarter=1/month=1/day=7']
The files were written by:
df_join_with_time.coalesce(1).write.partitionBy("entityid","year","quarter","month","day").mode("append").parquet(rawFolderPrefix)
When I try to read the folders stored in folderString with df_batch=spark.read.parquet(folderString), I get the error java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.lang.String.
If I save the files in CSV format and read them with the code below, it works fine. Is there any way to read a list of parquet folders like this? Much appreciated!
In [46]: folderList=df_inc.collect()
    ...: folderString=[]
    ...:
    ...: for x in folderList:
    ...:     folderString.append(x.folders)
    ...: df_batch=spark.read.csv(folderString)
    ...:
In [47]: df_batch.show()
+------------+---+-------------------+----------+----------+
| _c0|_c1| _c2| _c3| _c4|
+------------+---+-------------------+----------+----------+
|6C25B9C3DD54| 1|2017-01-07 00:00:01|1483718401|1483718400|
|38BC1ADB0164| 3|2017-01-06 00:00:01|1483632001|1483632000|
|38BC1ADB0164| 3|2017-01-07 00:00:01|1483718401|1483718400|
You are facing a misunderstanding of partitioning in Hadoop and Parquet.
See, I have a simple file structure partitioned by year-month. It is like this:
my_folder
.
├── year-month=2016-12
│   └── my_files.parquet
└── year-month=2016-11
    └── my_files.parquet
If I make a read from my_folder without any filter in my dataframe reader like this:
df = spark.read.parquet("path/to/my_folder")
df.show()
If you check the Spark DAG visualization, you can see that in this case it will read all my partitions, as you said.
In the case above, each point in the first square is one partition of my data.
But if I change my code to this:
from pyspark.sql.functions import col, lit

df = spark.read.parquet("path/to/my_folder") \
       .filter((col('year-month') >= lit(my_date.strftime('%Y-%m'))) &
               (col('year-month') <= lit(my_date.strftime('%Y-%m'))))
The DAG visualization will show how many partitions I'm using:
So, if you filter by the column the data is partitioned on, you will not read all the files, only the ones you need; you don't need the workaround of reading one folder at a time.
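Applied to the layout in the question, a minimal sketch of that approach (the /data/raw base path and the specific filter values are assumptions for illustration):

from pyspark.sql.functions import col

# Read from the dataset root; Spark discovers entityid/year/quarter/month/day
# as partition columns from the directory names.
df = spark.read.parquet("/data/raw")

# Filters on partition columns are pruned at planning time, so only the
# matching day=6/day=7 folders of the listed entities are actually read.
df_batch = df.filter(
    col("entityid").isin(100055, 100082, 10012) &
    (col("year") == 2017) & (col("quarter") == 1) &
    (col("month") == 1) & col("day").isin(6, 7)
)
df_batch.show()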
I got this solved by:
df=spark.read.parquet(folderString[0])
y=0
for x in folderString:
    if y>0:
        df=df.union(spark.read.parquet(x))
    y=y+1
It's a very ugly solution; if you have a better idea, please let me know. Many thanks.
A few days later, I found the perfect way to solve the problem:
df=spark.read.parquet(*folderString)
The * unpacks the Python list, so each folder path is passed to spark.read.parquet as a separate string argument.

Using Jekyll's Collection relative_directory for organizing pages/collections

I thought that setting the relative_directory property (Jekyll Collection Docs) (github PR) would help me keep my files organized without compromising my desired output, but it seems to be ignored/not used when producing files. I don't want my collections to be in the root directory, because I find it confusing to have ~10 collection folders adjacent to _assets, _data, _includes, _layouts, and others.
Fixes or alternative solutions are welcome, as long as the output is the same and my pages are in their own directory, without needing to put permalink front-matter on every single page.
_config.yaml
collections:
  root:
    relative_directory: '_pages/root'
    output: true
    permalink: /:path.html
  root-worthy:
    relative_directory: '_pages/root-worthy'
    output: true
    permalink: /:path.html
  docs:
    relative_directory: '_pages/docs'
    output: true
    permalink: /docs/:path.html
Directory Structure:
├── ...
├── _layouts
├── _pages
│   ├── root
│   │   ├── about.html
│   │   └── contact.html
│   ├── root_worthy
│   │   ├── quickstart.html
│   │   └── seo-worthy-page.html
│   └── docs
│       ├── errors.html
│       └── api.html
├── _posts
└── index.html
Desired output:
├── ...
├── _site
│   ├── about.html
│   ├── contact.html
│   ├── quickstart.html
│   ├── seo-worthy-page.html
│   └── docs
│       ├── errors.html
│       └── api.html
└── ...
It seems that the PR you mention is still not merged.
For 3.1.6 and the upcoming 3.2, the Jekyll code is still:
@relative_directory ||= "_#{label}"
But the requester made a plugin that looks like this:
_plugins/collection_relative_directory.rb
module Jekyll
  class Collection
    # Let a collection override its source folder via a `relative_directory`
    # entry in its metadata; otherwise fall back to the default "_<label>".
    def relative_directory
      @relative_directory ||= (metadata['relative_directory'] && site.in_source_dir(metadata['relative_directory']) || "_#{label}")
    end
  end
end

Aurelia: issue configuring bundled asset paths

I'm starting to build a front-end in Aurelia that is integrated into a Clojure project. I checked out some example projects and while I did find them useful for an overview, I've noticed that a lot of the out-of-the-box configurations assume the project is at root (which is understandable).
When I try to integrate this into my project and change the paths accordingly, the bundle is looking for resources in the wrong place - with a path that matches their location in the project, but not within the server's public directory.
For example, I have configured the application to bundle files that are in ./resources/public/, which is where the built files are located (I'm using several pre-processors), and these files are bundled correctly; however, when I load the page, I get the following error in my JS console:
system.src.js:4597 GET https://localhost:8443/resources/public/dist/aurelia.js 404 (Not Found)
The correct path is localhost:8443/dist/aurelia.js - I have a feeling that the /resources/public is coming from some configuration files, but if I change those the bundling breaks.
Relevant paths in the project are (truncated for brevity):
MyProject
├── gulp
│   ├── bundles.js
│   ├── paths.js
│   └── tasks
│       ├── build.js
│       ├── bundle.js
├── gulpfile.js
├── package.json
├── resources
│   ├── public
│   │   ├── config.js
│   │   ├── css
│   │   ├── dist
│   │   │   ├── app-build.js
│   │   │   └── aurelia.js
│   │   ├── fonts
│   │   ├── html
│   │   ├── img
│   │   ├── index.html
│   │   ├── js
│   │   │   ├── app.js
│   │   └── jspm_packages
│   │       ├── system.js
│   │       ├── github
│   │       └── npm
│   └── src
│       ├── fonts
│       ├── img
│       ├── js
│       ├── pug
│       └── stylus
Here are some of the pertinent configurations, trimmed for brevity:
./config.js
baseURL: "/",
...
paths: {
  "*": "resources/public/*",
  "github:*": "jspm_packages/github/*",
  "npm:*": "jspm_packages/npm/*"
}
Note that if I change the path for "*" (or remove it entirely), I get the following error when running gulp build:
Error on dependency parsing for npm:jquery#2.2.4/src/intro.js at
file:///Users/jszpila/Work/MyProject/resources/public/jspm_packages/npm/jquery#2.2.4/src/intro.js
MultipleErrors:
compiler-parse-input:47:1: Unexpected token End of File
compiler-parse-input:47:1: Unexpected token End of File
compiler-parse-input:47:1: Unexpected token End of File
./package.json
"jspm": {
  "directories": {
    "baseURL": "resources/public"
  },
  "devDependencies": {
    "aurelia-animator-css": "npm:aurelia-animator-css@^1.0.0-beta.1.1.2",
    "aurelia-bootstrapper": "npm:aurelia-bootstrapper@^1.0.0-beta.1.1.4",
    "aurelia-fetch-client": "npm:aurelia-fetch-client@^1.0.0-beta.1.1.1",
    ...
  }
},
./gulp/bundles.js
"bundles": {
  "dist/app-build": {
    "includes": [
      "[**/*.js]",
      "**/*.html!text",
      "**/*.css!text"
    ],
    "options": {
      "inject": true,
      "minify": true,
      "depCache": true,
      "rev": false
    }
  },
  "dist/aurelia": {
    "includes": [
      ...
      "fetch",
      "jquery"
    ],
    "options": {
      "inject": true,
      "minify": true,
      "depCache": false,
      "rev": false
    }
  }
}
./gulp/tasks/bundle.js
var config = {
  force: true,
  baseURL: './resources/public/',
  configPath: './resources/public/config.js',
  bundles: bundles.bundles
};
So I think it's safe to assume that this incorrect path is coming from one of those configurations; however, they are correct for the tooling - just not the bundle. Can I configure the bundling tool paths and the application tool paths separately, or am I overlooking a misconfiguration?
Thanks in advance for any help!

Vagrant chef_solo provisioner raising NoMethodError - undefined method 'default_attributes' for #<Hash: ....>

I'm trying to provision a VM using Vagrant in order to test a cookbook (chef-mycookbooks-test) I created. It depends on data_bags in chef-repo. I can't find any answers as to why this isn't working.
The VM starts up fine but it throws an exception when it runs chef-solo:
INFO: Setting the run_list to ["recipe[mycookbooks-test::default]"] from CLI options
DEBUG: Applying attributes from json file
Error expanding the run_list:
NoMethodError: undefined method `default_attributes' for #<Hash:0x0000000384e3a7>
I don't know if it matters, but the environment I'm using (local.json) has default_attributes defined. When I log on to the VM and look at solo.rb I see some of the data from my provisioning (eg. environment "local") but not all of it (eg. no role set, default cookbook_path, role_path is []). I get the same error when I try to run chef-solo from the VM manually.
Directory structure on host machine:
.
├── chef-repo
│   ├── data_bags
│   │   ├── groups
│   │   │   └── test.json
│   │   └── users
│   │       └── test.json
│   ├── environments
│   │   └── local.json
│   └── roles
│       └── test_server.json
└── chef-mycookbooks-test
    ├── Berksfile
    ├── Berksfile.lock
    ├── Vagrantfile
    ├── files
    │   └── default
    │       └── sudoers.test
    ├── metadata.rb
    └── recipes
        └── default.rb
Vagrantfile:
Vagrant.configure("2") do |config|
  config.omnibus.chef_version = :latest
  config.vm.box = "hansode/centos-6.3-x86_64"
  config.berkshelf.enabled = true
  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = ".."
    chef.data_bags_path = "../chef-repo/data_bags"
    chef.environments_path = "../chef-repo/environments"
    chef.environment = "local"
    chef.roles_path = "../chef-repo/roles"
    chef.add_role("test_server")
    chef.run_list = [
      "recipe[mycookbooks-test::default]"
    ]
  end
end
Interestingly, if I remove the line defining chef.environment, the run_list expands just fine to [mycookbooks-test::default], but chef-solo fails later when it tries to run some of the dependencies, for which the environment data is needed.
Any help at all would be greatly appreciated. I am completely out of ideas.
Note: I cleaned up some of the unnecessary things in the directory tree (eg. README.md) and Vagrantfile (eg. debug level) for clarity.
Things like cookbooks_path are relative to the host machine, not the guest. Are you sure you really have your cookbooks at /? Chances are you are loading a bunch of invalid data and Chef is choking on it. You can also up your logging level via chef.log_level = :debug.
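As a quick sketch of that last suggestion, the logging option goes inside the existing chef_solo block of the Vagrantfile shown above (everything else left as it is):

config.vm.provision :chef_solo do |chef|
  chef.log_level = :debug   # print what chef-solo actually loads (solo.rb, environment, roles, cookbooks)
  # ... existing cookbooks_path / environments_path / roles / run_list settings from the Vagrantfile above ...
end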