OpenShift V3 and incremental builds

I am having some issues using incremental builds with the ruby-22-centos7 image.
I added the following save-artifacts script to the .sti/bin directory:
#!/bin/sh -e
pushd ${HOME} >/dev/null
if [ -d ./bundle/ruby ]; then
  tar cf - bundle/ruby
fi
popd >/dev/null
I get this error during the build step:
I0330 13:53:05.022524 1 sti.go:213] Using assemble from image:///usr/libexec/s2i
I0330 13:53:05.022544 1 sti.go:213] Using run from image:///usr/libexec/s2i
I0330 13:53:05.022551 1 sti.go:213] Using save-artifacts from upload/src/.sti/bin
I0330 13:53:05.024552 1 sti.go:142] Existing image for tag 172.30.22.77:5000/blog/blog:latest detected for incremental build
I0330 13:53:05.024570 1 sti.go:147] Performing source build from file:///tmp/s2i-build462497527/upload/src
I0330 13:53:05.024654 1 sti.go:350] Saving build artifacts from image 172.30.22.77:5000/blog/blog:latest to path /tmp/s2i-build462497527/upload/artifacts
I0330 13:53:05.026788 1 docker.go:374] Both scripts and untarred source will be placed in '/tmp'
I0330 13:53:05.026820 1 docker.go:510] Creating container using config: {Hostname: Domainname: User: Memory:0 MemorySwap:0 CPUShares:0 CPUSet: AttachStdin:false AttachStdout:true AttachStderr:false PortSpecs:[] ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[] Cmd:[/tmp/scripts/save-artifacts] DNS:[] Image:172.30.22.77:5000/blog/blog:latest Volumes:map[] VolumeDriver: VolumesFrom: WorkingDir: MacAddress: Entrypoint:[] NetworkDisabled:false SecurityOpts:[] OnBuild:[] Mounts:[] Labels:map[]}
I0330 13:53:05.685226 1 docker.go:524] Attaching to container
I0330 13:53:05.686542 1 docker.go:530] Starting container
E0330 13:53:10.836202 1 tar.go:207] Error reading next tar header: io: read/write on closed pipe
W0330 13:53:10.859154 1 sti.go:150] Clean build will be performed because of error saving previous build artifacts
I0330 13:53:10.859172 1 sti.go:152] ERROR: timeout waiting for tar stream
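For what it's worth, two details about save-artifacts may be relevant to the broken tar stream: pushd and popd are bash builtins, so the #!/bin/sh shebang only works if /bin/sh in the builder image is actually bash, and the script must write nothing except the tar stream to stdout. A minimal POSIX-sh sketch of the same script, under those assumptions:

#!/bin/sh -e
# stdout must carry only the tar stream; any other output corrupts it
cd "${HOME}"
if [ -d ./bundle/ruby ]; then
  tar cf - bundle/ruby
fi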
Any help would be greatly appreciated!

Related

OpenShift upgrade error 4.11.x -> 4.12.2 Marking Degraded due to: unexpected on-disk state validating against rendered-worker

I'm administering a RHEL OpenShift cluster, upgrading from 4.10.x -> 4.11.x -> 4.12.2.
There are 3 masters and 7 worker nodes.
All 3 masters are updated.
3 of the 7 workers are updated.
The upgrade is now stuck on worker0 with:
oc logs machine-config-daemon-4bs9x -n openshift-machine-config-operator
< snip >
I0216 21:00:08.555947 3136 daemon.go:1255] Current config: rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.555986 3136 daemon.go:1256] Desired config: rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
I0216 21:00:08.555992 3136 daemon.go:1258] state: Degraded
I0216 21:00:08.566365 3136 update.go:2089] Running: rpm-ostree cleanup -r
Deployments unchanged.
I0216 21:00:08.647332 3136 update.go:2104] Disk currentConfig rendered-worker-263c6ea5fafb6f1da35a31749a1180d7 overrides node's currentConfig annotation rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.651201 3136 daemon.go:1564] Validating against pending config rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
E0216 21:00:10.291740 3136 writer.go:200] Marking Degraded due to: unexpected on-disk state validating against rendered-worker-263c6ea5fafb6f1da35a31749a1180d7: expected target osImageURL "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee", have "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17" ("b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f")
I've had this problem before and followed the Red Hat solutions to run the following command, but it is now failing.
oc debug node/worker0.xx.com
sh-4.4# chroot /host
sh-4.4# rpm-ostree status
State: idle
Deployments:
* db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2
Version: 412.86.202301311551-0 (2023-01-31T15:54:05Z)
sh-4.4#
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee"
I0216 21:02:54.449270 3962714 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-821872843 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee
I0216 21:03:48.349962 3962714 rpm-ostree.go:209] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0216 21:03:49.926169 3962714 rpm-ostree.go:246] No com.coreos.ostree-commit label found in metadata! Inspecting...
I0216 21:03:49.926234 3962714 rpm-ostree.go:412] Running captured: ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo
error: error running ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo: exit status 1
error: opening repo: opendir(/run/mco-machine-os-content/os-content-821872843/srv/repo): No such file or directory
sh-4.4#
After a reboot and retry, I'm now getting:
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
I0217 19:10:06.928154 1443914 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee
error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
W0217 19:10:07.176459 1443914 run.go:45] nice failed: running nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee failed: error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
: exit status 1; retrying...
^C
I tried this:
/run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
expecting this result (from a previous upgrade problem):
sh-4.4# chroot /host
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17"
I0208 21:50:00.408235 2962835 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-3432684387 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0208 21:50:29.727695 2962835 rpm-ostree.go:353] Running captured: rpm-ostree status --json
I0208 21:50:29.780350 2962835 rpm-ostree.go:261] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:7c252d64354d207cd7fb2a6e2404e611a29bf214f63a97345dee1846055c15d8
I0208 21:50:31.456928 2962835 rpm-ostree.go:293] Pivoting to: 411.86.202301242231-0 (b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f)
I0208 21:50:31.456966 2962835 rpm-ostree.go:325] Executing rebase from repo path /run/mco-machine-os-content/os-content-3432684387/srv/repo with customImageURL pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 and checksum b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f
I0208 21:50:31.457048 2962835 update.go:1972] Running: rpm-ostree rebase --experimental /run/mco-machine-os-content/os-content-3432684387/srv/repo:b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f --custom-origin-url pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 --custom-origin-description Managed by machine-config-operator
0 metadata, 0 content objects imported; 0 bytes content written
Staging deployment... done
Upgraded:
NetworkManager 1:1.30.0-16.el8_4 -> 1:1.36.0-12.el8_6
< snip >
zlib 1.2.11-18.el8_4 -> 1.2.11-19.el8_6
Removed:
ModemManager-glib-1.10.8-2.el8.x86_64
libmbim-1.20.2-1.el8.x86_64
libqmi-1.24.0-1.el8.x86_64
openvswitch2.16-2.16.0-108.el8fdp.x86_64
redhat-release-coreos-410.84-2.el8.x86_64
Added:
WALinuxAgent-udev-2.3.0.2-2.el8_6.3.noarch
glibc-gconv-extra-2.28-189.5.el8_6.x86_64
libbpf-0.4.0-3.el8.x86_64
openvswitch2.17-2.17.0-67.el8fdp.x86_64
redhat-release-8.6-0.1.el8.x86_64
redhat-release-eula-8.6-0.1.el8.x86_64
shadow-utils-subid-2:4.6-16.el8.x86_64
Run "systemctl reboot" to start a reboot
sh-4.4# systemctl reboot
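Two hedged observations on the failed pivot attempts above, offered as pointers rather than a confirmed fix: the "invalid checksum digest length" error means the sha256 digest pasted into the pivot command was truncated (a valid digest is exactly 64 hex characters; compare the digests in the failing commands with the one in the Degraded message), and the machine-config-daemon has a documented force file that makes it skip on-disk validation on its next sync. A quick sketch:

# a sha256 digest must be exactly 64 hex characters
echo -n "f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee" | wc -c   # prints 64

# hedged workaround for "unexpected on-disk state": create the MCD force file
# on the node, then let the machine-config-daemon retry
touch /run/machine-config-daemon-force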

Unable to build podman-compatible containers using nix-build and dockerTools.buildImage

The following is invidious.nix, which builds a container image containing the Nix packages for Bash, BusyBox and Invidious:
let
  # nixos-22.05 / https://status.nixos.org/
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d86a4619b7e80bddb6c01bc01a954f368c56d1df.tar.gz") {};
in rec {
  docker = pkgs.dockerTools.buildImage {
    name = "invidious";
    contents = [ pkgs.busybox pkgs.bash pkgs.invidious ];
    config = {
      Cmd = [ "/bin/bash" ];
      Env = [];
      Volumes = {};
    };
  };
}
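For reference, the result symlink used below comes from building the docker attribute of this file; a typical invocation (assuming it is saved as invidious.nix) would be:

nix-build invidious.nix -A docker
# leaves ./result as a symlink to the image tarball in the Nix store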
If I load the image with docker load < result, Docker loads it correctly.
docker load < result
14508d34fd29: Loading layer [==================================================>] 156.6MB/156.6MB
Loaded image: invidious:2nrcdxgz46isccfgyzdcbirs0vvqhp55
However, if I attempt the same thing using podman, I get the following error:
podman load < result
Error: payload does not match any of the supported image formats:
* oci: initializing source oci:/var/tmp/podman3824611648:: open /var/tmp/podman3824611648/index.json: not a directory
* oci-archive: loading index: open /var/tmp/oci1927542201/index.json: no such file or directory
* docker-archive: loading tar component manifest.json: archive/tar: invalid tar header
* dir: open /var/tmp/podman3824611648/manifest.json: not a directory
If I inspect the result, it does appear to be a well-formed container image archive:
tar tvfz result
dr-xr-xr-x root/root 0 1979-12-31 19:00 ./
-r--r--r-- root/root 391 1979-12-31 19:00 027302622543ef251be6d3f2d616f98c73399d8cd074b0d1497e5a7da5e6c882.json
dr-xr-xr-x root/root 0 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/
-r--r--r-- root/root 3 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/VERSION
-r--r--r-- root/root 353 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/json
-r--r--r-- root/root 156579840 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/layer.tar
-r--r--r-- root/root 280 1979-12-31 19:00 manifest.json
-r--r--r-- root/root 128 1979-12-31 19:00 repositories
How do I get nix-build to create compliant containers that podman can read?
nix-build version: 2.10.3
podman version: 4.2.0
It turns out that the version of podman I'm running can't read gzipped tar archives. The following works:
zcat result | podman load
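If you want to confirm what you are dealing with first, or keep a decompressed copy around, something along these lines should also work (the archive name is arbitrary; the exact file output wording varies):

file result                   # reports gzip-compressed data for a buildImage tarball
zcat result > invidious.tar   # decompress once
podman load -i invidious.tar  # podman reads the uncompressed docker-archive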

JMeter CLI report generation fails - org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json"

JMeter 5.3
I use the CLI as follows:
C:\Users\guyl\OneDrive - xxxxLTD\Guy\apache-jmeter-5.3\bin>jmeter -n -t "C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\DCS DB threaded test.jmx" -l"C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\db_report.csv" -Jthreads=5 -Jloops=100 -e -o"C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\output\19082020\"
Creating summariser <summary>
Created the tree successfully using C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\DCS DB threaded test.jmx
Starting standalone test # Wed Aug 26 14:22:56 IDT 2020 (1598440976507)
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
Generate Summary Results + 91 in 00:00:03 = 32.0/s Avg: 35 Min: 0 Max: 1031 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 91 in 00:00:03 = 32.1/s Avg: 35 Min: 0 Max: 1031 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 104409 in 00:00:29 = 3621.4/s Avg: 1 Min: 0 Max: 448 Err: 50400 (48.27%) Active: 0 Started: 5 Finished: 5
summary = 104500 in 00:00:32 = 3299.8/s Avg: 1 Min: 0 Max: 1031 Err: 50400 (48.23%)
Generate Summary Results + 104409 in 00:00:29 = 3620.9/s Avg: 1 Min: 0 Max: 448 Err: 50400 (48.27%) Active: 0 Started: 5 Finished: 5
Generate Summary Results = 104500 in 00:00:32 = 3299.1/s Avg: 1 Min: 0 Max: 1031 Err: 50400 (48.23%)
Tidying up ... # Wed Aug 26 14:23:28 IDT 2020 (1598441008843)
Error generating the report: org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json" is unable to export data.
... end of run
At the end, you can see we have:
Error generating the report: org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json" is unable to export data.
... end of run
Note that it didn't matter whether the report's extension was CSV or JTL.
I was able to generate the JTL report, and then run jmeter -g <my JTL file>, but I'd like the -e option to work.
Update: Now I get errors with the -g option:
C:\Users\guyl\OneDrive - xxxLTD\Guy\apache-jmeter-5.3\bin>jmeter -g "C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\db_report" -o"C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\output\19082020\"
An error occurred: Data exporter "json" is unable to export data.
errorlevel=1
Press any key to continue . . .
Here is what I found in the jmeter.log file:
2020-08-26 14:37:51,812 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.FilterConsumer#stopProducing(): nameFilter produced 1567500 samples
2020-08-26 14:37:51,812 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.FilterConsumer#stopProducing(): dateRangeFilter produced 313500 samples
2020-08-26 14:37:51,812 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.NormalizerSampleConsumer#stopProducing(): normalizer produced 104500 samples
2020-08-26 14:37:51,813 INFO o.a.j.r.p.CsvFileSampleSource: produce(): 104500 samples produced in 6s 70 ms on channel 0
2020-08-26 14:37:51,813 INFO o.a.j.r.d.ReportGenerator: Exporting data using exporter:'json' of className:'org.apache.jmeter.report.dashboard.JsonExporter'
2020-08-26 14:37:51,814 INFO o.a.j.r.d.JsonExporter: Found data for consumer statisticsSummary in context
2020-08-26 14:37:51,814 INFO o.a.j.r.d.JsonExporter: Creating statistics for overall
2020-08-26 14:37:51,815 INFO o.a.j.r.d.JsonExporter: Creating statistics for other transactions
2020-08-26 14:37:51,815 INFO o.a.j.r.d.JsonExporter: Checking output folder
2020-08-26 14:37:51,816 ERROR o.a.j.JMeter: An error occurred:
org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json" is unable to export data.
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:385) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.ReportGenerator.generate(ReportGenerator.java:258) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.JMeter.start(JMeter.java:545) [ApacheJMeter_core.jar:5.3]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_241]
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[?:1.8.0_241]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:1.8.0_241]
at java.lang.reflect.Method.invoke(Unknown Source) ~[?:1.8.0_241]
at org.apache.jmeter.NewDriver.main(NewDriver.java:252) [ApacheJMeter.jar:5.3]
Caused by: org.apache.jmeter.report.dashboard.ExportException: Error creating output folder C:\Users\guyl\OneDrive - xxx LTD\Guy\JMeter\output\19082020"
at org.apache.jmeter.report.dashboard.JsonExporter.checkAndGetOutputFolder(JsonExporter.java:112) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.JsonExporter.export(JsonExporter.java:77) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:379) ~[ApacheJMeter_core.jar:5.3]
... 7 more
Caused by: java.io.IOException: Unable to create directory C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\output\19082020"
at org.apache.commons.io.FileUtils.forceMkdir(FileUtils.java:2491) ~[commons-io-2.6.jar:2.6]
at org.apache.jmeter.report.dashboard.JsonExporter.checkAndGetOutputFolder(JsonExporter.java:110) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.JsonExporter.export(JsonExporter.java:77) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:379) ~[ApacheJMeter_core.jar:5.3]
... 7 more
So it all started from:
Caused by: java.io.IOException: Unable to create directory C:\Users\guyl\OneDrive - xxx LTD\Guy\JMeter\output\19082020"
It's supposed to create that folder if it doesn't exist, isn't it?
First, JMeter will create only one level of folder, not the full hierarchy.
Second, try to avoid folders with spaces in the path.
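As a sketch of that advice (the space-free parent path here is hypothetical): pre-create the parents yourself and hand JMeter only the final, not-yet-existing segment:

rem hypothetical space-free parent folder, created up front
mkdir "C:\JMeter\output"
rem JMeter then only has to create the last path segment itself
rem (db_report.jtl stands in for the results file written via -l)
jmeter -g db_report.jtl -o "C:\JMeter\output\19082020"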
I was getting this error as well, and the solution to my problem was in this command:
jmeter -n -t "location of test file" -l "location of your result file" -e -o "location of reports folder"
The third option, the location of the reports folder, needed to be a brand-new, non-existent folder. Once the command created the folder itself, my HTML reports were generated successfully!
I've just experienced the same error message, using JMeter 5.4.1.
In my case, I ended up investigating the problem using ProcMon.
ProcMon showed that the closing double quote (") was being included in the folder path, and a double quote is an invalid character in Windows folder and file names.
I was fortunate that my path did not contain any spaces, so removing the double quotes did not affect me. Try to use folders without spaces in their names, or alternatively use the 8.3 short names in the paths instead.
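Tying these answers together: in the original command the output path ends in \" , and on Windows the Java launcher treats a backslash before a quote as escaping it, so the literal quote character becomes part of the path (exactly what ProcMon observed and what the stack trace shows in the folder name). A hedged rewrite of the original command, with shortened hypothetical paths, no trailing backslash, and a space after each flag:

rem no backslash before the closing quote, so the quote is not escaped into the path
jmeter -n -t "C:\JMeter\DCS DB threaded test.jmx" -l "C:\JMeter\db_report.csv" -Jthreads=5 -Jloops=100 -e -o "C:\JMeter\output\19082020"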

How to solve bazel build error in Docker Build?

This error keeps appearing during docker build.
I have tried various approaches in the code.
ERROR: Process exited with status 128: Process exited with status 128
++ git describe --long --tags
+ tf_git_rev=v1.14.0-14-g1aad02a78e
+ echo 'STABLE_TF_GIT_VERSION v1.14.0-14-g1aad02a78e'
+ pushd native_client
++ git describe --long --tags
fatal: No names found, cannot describe anything.
+ ds_git_rev=
STABLE_TF_GIT_VERSION v1.14.0-14-g1aad02a78e
/tensorflow/native_client /tensorflow
INFO: Elapsed time: 150.094s, Critical Path: 6.47s
INFO: 1 process: 1 local.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
The command '/bin/sh -c bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=cuda -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-mtune=generic --copt=-march=x86-64 --copt=-msse --copt=-msse2 --copt=-msse3 --copt=-msse4.1 --copt=-msse4.2 --copt=-mavx --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie --verbose_failures --action_env=LD_LIBRARY_PATH=${LD_LIBRARY_PATH}' returned a non-zero code: 1
The build should complete successfully.
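The log itself narrows this down: git describe --long --tags succeeds in /tensorflow but fails in native_client with "No names found, cannot describe anything", which means that checkout has no reachable tags (git describe exits with status 128, matching the error above), so ds_git_rev ends up empty. A hedged pre-build sketch, assuming the remote is named origin:

# run inside the native_client checkout before invoking bazel
cd /tensorflow/native_client
git fetch origin --tags        # fetch tags so `git describe --long --tags` can resolve one
git describe --long --tags     # should now print a version string instead of failing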

lein cljsbuild fails with untraceable error. How do you troubleshoot cljsbuild errors?

I do not see any log file for the compilation, and the error in the terminal is insufficient for me to troubleshoot further.
How do I get more verbose error logging, or how should I troubleshoot this issue?
The first few lines of the stack trace are below:
Compiling ClojureScript...
Compiling ["resources/public/js/app.js"] from ["src/cljs"]...
Compiling ["resources/public/js/app.js"] failed.
clojure.lang.ExceptionInfo: failed compiling file:resources\public\js\out\cljs\core.cljs {:file #object[java.io.File 0x7c5d1d25 "resources\\public\\js\\out\\cljs\\core.cljs"], :clojure.error/phase :compilation}
at cljs.compiler$compile_file$fn__3901.invoke(compiler.cljc:1706)
at cljs.compiler$compile_file.invokeStatic(compiler.cljc:1666)
I have a simple cljs file with the following contents:
(ns moose.core)

(defn run []
  (.write js/document "This is not the end!"))
My project.clj has the following config for cljsbuild:
:cljsbuild
{:builds [{:id "dev"
           :source-paths ["src/cljs"]
           :figwheel {:on-jsload "moose.core/run"
                      :open-urls ["http://localhost:3449/index.html"]}
           :jar true
           :compiler {:main moose.core
                      :warnings true
                      :output-dir "resources/public/js/out"
                      :asset-path "js/out"
                      :output-to "resources/public/js/app.js"}}]}
:clean-targets ^{:protect false} [:target-path :compile-path "resources/public/js" "dev-target"]
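One low-tech way to capture a complete compiler log while troubleshooting (a sketch; "dev" is the build id from the config above):

lein cljsbuild once dev 2>&1 | tee cljsbuild.log
# cljsbuild.log then holds the full compiler output, including the whole stack trace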
Update 1
Following Alan's advice below, I created a new template and narrowed down the cause to adding a fairly old library for interacting with CouchDB:
[com.ashafa/clutch "0.4.0"]
The question remains: how do I get detailed/complete logs for cljsbuild?
Update 2
It turns out that the position of the library in the dependencies list has an impact.
If it appears before [com.cognitect/transit-clj "0.8.313"], compilation fails; otherwise it works.
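That ordering sensitivity is the classic signature of a transitive dependency conflict: whichever version of a shared dependency appears first on the classpath wins. A sketch of how to surface the conflict (the grep target is just this question's suspect):

lein deps :tree 2>&1 | grep -B2 -A2 transit
# look for version-conflict warnings; adding :exclusions to the older
# library's entry in project.clj is the usual remedy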
The configuration options in ClojureScript are not well documented. It is easiest to clone an existing (working) project and go from there. I would suggest starting from the cljs-template project as follows (see the README):
git clone https://github.com/cloojure/cljs-template.git demo-0212
> cd demo-0212
~/expr/demo-0212 > ls -ldF *
-rwxrwxr-x 1 alan alan 222 Feb 12 16:04 npm-install.bash*
-rwxrwxr-x 1 alan alan 4216 Feb 12 16:04 project.clj*
-rw-rw-r-- 1 alan alan 1576 Feb 12 16:04 README.adoc
drwxrwxr-x 3 alan alan 4096 Feb 12 16:04 resources/
drwxrwxr-x 5 alan alan 4096 Feb 12 16:04 src/
drwxrwxr-x 4 alan alan 4096 Feb 12 16:04 test/
~/expr/demo-0212 > ./npm-install.bash
...<snip>... lots of stuff
At this point your project has the npm stuff needed for the unit tests.
> lein clean
> lein doo phantom test once
;; ======================================================================
;; Testing with Phantom:
doorunner - beginning
doorunner - end
Testing tst.flintstones.dino
test once - enter
globalObject: #js {:a 1, :b 2, :c 3}
(-> % .-b (+ 5) => 7
(js/makeDino) => #js {:desc blue dino-dog, :says #object[Function]}
dino.desc => blue dino-dog
dino.says(5) => Ruff-Ruff-Ruff-Ruff-Ruff!
:keep-words ("am" "having" "today")
:re-seq ("am" "having" "today")
test once - leave
Testing tst.flintstones.wilma
test each - enter
test each - leave
test each - enter
wilmaPhony/stats: #js {:lipstick red, :height 5.5}
wilma => #js {:desc patient housewife, :says #object[Function]}
test each - leave
Testing tst.flintstones.pebbles
test once - enter
test once - leave
Testing tst.flintstones.slate
logr-slate-enter
logr-slate-leave 3
Testing tst.flintstones.bambam
test each - enter
test each - leave
test each - enter
logr-bambam-enter
logr-bambam-leave 3
test each - leave
Ran 9 tests containing 22 assertions.
0 failures, 0 errors.
lein doo phantom test once 38.73s user 1.05s system 313% cpu 12.701 total
You can also fire off figwheel to see results in the browser:
> lein clean
> lein figwheel
see new webpage (30-60 sec delay)
------------------------
Figwheel template
Checkout your developer console.
I am a component!
I have bold and red text.
...etc...
------------------------