The old monolithic clojure.contrib was available as a .jar from the same place you got the clojure .jar, and you used it by pointing your classpath at it. As far as I can tell, the new modular contribs aren't available in the clojure .jar -- instead, they exist as source files on github. What's the expected way for you to use them? Say, e.g., I wanted to use something in clojure.math.numeric-tower. What would I do?
I've found How do I install Clojure 1.3 with contribs on RHEL 6.1 / JDK7?, but the only answer ("use leiningen") isn't detailed enough for me to figure it out. (Searching Clojars for numeric-tower yields... nothing.)
You install a contrib module by adding its info to :dependencies in your project.clj file. The next time you run lein for something, it notices your change and automatically grabs the library for you.
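For example, a minimal project.clj pulling in numeric-tower might look like this (the project name and version numbers are placeholders; adjust them to your setup):

(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.3.0"]
                 [org.clojure/math.numeric-tower "0.0.1"]])

After that, lein deps (or any other lein task) fetches the library, and (require '[clojure.math.numeric-tower :as math]) works at the REPL.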
I have some more detailed instructions written up here.
As stated in Maven Settings and Repositories, the repository where all Clojure artifacts are deployed is Sonatype OSS Nexus. If you don't want to go the Leiningen or Maven way (which I would still advise you to consider, even for one-off experiments), you can still download all the artifacts manually from that repository. Specifically, here are all the uploaded versions of clojure.math.numeric-tower.
I can understand the reluctance to use Leiningen, though it took me longer to write this sentence than to create a new project.
My usual first stop for this sort of question is http://dev.clojure.org/display/design/Where+Did+Clojure.Contrib+Go
From there I click through to the latest release, get the artifact id and version, and then add a line to the project.clj's :dependencies section like so:
[org.clojure/math.numeric-tower "0.0.1"]
If you use Clojure, you should really also be using either Leiningen or Maven to manage your dependencies. I believe these are the only sane ways to stay on top of a complex dependency graph as your project gets larger and has more complex build requirements.
For example, I use Maven and have the following in my project's pom.xml to include the numeric dependencies:
<dependency>
  <groupId>org.clojure</groupId>
  <artifactId>math.numeric-tower</artifactId>
  <version>0.0.1</version>
</dependency>
All the modular Clojure contrib libraries can be included in the same way.
I have a unit-test project where I used MVVMCross for dependency injection, but since I made an implementation of the library under test for UWP, I wanted to use the unit tests for both implementations.
For that purpose I used compiler switches, and ended up removing MVVMCross.
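For illustration, the switches look roughly like this (the UWP symbol and the service names are made up for this sketch; Mvx.Resolve is MVVMCross's service locator):

#if UWP
    IMyService service = new UwpMyService();          // my own UWP implementation
#else
    IMyService service = Mvx.Resolve<IMyService>();   // resolved through MVVMCross DI
#endif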
But when I uninstall it from the packages, I always get "Resource.Attribute" does not contain a definition for "MvxBind", along with some other MVVMCross attributes.
I rebuilt the project. I deleted the obj and bin folders and looked through every file in the project folder, but I can't find why it keeps adding those attributes to my Resource.Designer.cs.
If I add MVVMCross back and just don't use it at all, it obviously works. But I'd rather remove it, since it is overkill here.
Why and how does Polymer use Bower? And do I NEED to learn Bower in order to use Polymer?
I was going through the Catalog of components, and all of them seem to have a 'Bower Command'.
Thanks for your help.
Edit: Bower is a package manager just like npm; I do understand that much. What I meant to ask is: it can be argued that npm has a wider user base than Bower, and some even argue that we should stop using Bower altogether, like here and here. So how does using Bower benefit Polymer when there are other options? Is what Polymer does only achievable through Bower?
Bower, just like npm, is a package manager. Here you can see the difference between the two.
No, you don't need to use Bower to use Polymer, but without it you'll have to manually download each component that you need, place it at a location you can reference, and keep track of newer versions of each package you have used.
If you are creating custom elements to publish, the situation gets even worse: you'll have to ship a file along with your project listing all the dependencies, and users will have to manually download each dependency listed, then make sure they have all the dependencies required by your dependencies, and so on.
This would make custom elements, or modules in general, very hard to use. That's why such projects use a package management tool.
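For example, instead of downloading components by hand, you declare them once in a bower.json manifest (the element names and version ranges below are only illustrative) and let bower install fetch them, dependencies included:

{
  "name": "my-app",
  "dependencies": {
    "polymer": "Polymer/polymer#^1.4.0",
    "paper-button": "PolymerElements/paper-button#^1.0.0"
  }
}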
Edit: since the original question has been edited to ask more about why, the short answer is that Bower's focus was web dependencies, so it produces a flat dependency tree. With Bower now deprecated, the Polymer team's recommendation is to use Yarn with the --flat option. That also results in a flat dependency structure without multiple versions of the same dependency, which is critical to web development, and something npm has stated it will never offer.
You should see more components move from Bower to Yarn, especially after Polymer 3 is released. For more information than you'd ever want about this topic, check out this discussion: https://github.com/package-community/discussions/issues/2
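For what it's worth, the Yarn workflow amounts to two commands (the package name here is Polymer 3's npm package; --flat asks Yarn to settle on a single version of every dependency):

yarn add @polymer/polymer
yarn install --flat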
I have painfully come across the facts that scala.util.parsing and scala.swing are apparently no longer bundled with Scala 2.11. Each time, I had to google for the right line to add to an sbt configuration, or for the right link to download the jar file from.
In case there are other libraries that moved out, how am I supposed to learn these things? Or am I supposed to rely only on questions from people hitting the same problem on Stack Overflow? The Scala Swing project on GitHub does not even document this information.
I like creating Eclipse projects on the fly and making them depend on other projects in the same workspace, without going through sbt, and it is annoying to run into these disappearing-library cases on every computer/workspace where I do this.
The modularization (what you call externalizing) has been discussed for a good while on the scala-users mailing list. But the canonical place to find this information is the release notes. While you may not want to read all of them, I would strongly advise reading at least the release notes for each major version of any language you use. Case in point, the release notes for Scala 2.11.0:
Modularization
The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above. The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
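In practice, pulling an externalized module back in is one sbt line per module. A sketch for build.sbt (the 1.0.1 version numbers were current around the 2.11.0 release; check Maven Central for the latest):

libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.1"
libraryDependencies += "org.scala-lang.modules" %% "scala-swing" % "1.0.1"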
What is "vendoring" exactly? How would you define this term?
Does it mean the same thing in different programming languages? Conceptually speaking, not looking at the exact implementation.
Based on this answer
Defined here for Go as:
Vendoring is the act of making your own copy of the 3rd party packages your project is using. Those copies are traditionally placed inside each project and then saved in the project repository.
The context of this answer is in the Go language, but the concept still applies.
If your app depends on certain third-party code being available, you could declare a dependency and let your build system install it for you.
If, however, the source of the third-party code is not very stable, you could "vendor" that code: you take the third-party code and add it to your application in a more or less isolated way. If you take this isolation seriously, you should "release" this code internally to your organization/working environment.
Another reason for vendoring is if you want to use certain third-party code but change it a little bit (a fork, in other words). You can copy the code, change it, release it internally, and then let your build system install that piece of code.
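A minimal sketch of vendoring by hand, assuming Git and a hypothetical third-party library called somelib (all paths illustrative):

# snapshot the third-party source into your own tree
mkdir -p vendor/somelib
cp -r ~/src/somelib/. vendor/somelib/
# commit the copy so every checkout carries exactly the same code
git add vendor/somelib
git commit -m "Vendor somelib at v1.2.3"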
Vendoring means putting a dependency into your project folder (vs. depending on it globally) AND committing it to the repo.
For example, running cp /usr/local/bin/node ~/yourproject/vendor/node and committing it to the repo would "vendor" the Node.js binary – all devs on the project would use this exact version. This is not commonly done for node itself, but e.g. Yarn 2 ("Berry") is used like this (and only like this; they don't even install the binary globally).
The committing act is important. As an example, node_modules are already installed in your project but only committing them makes them "vendored". Almost nobody does that for node_modules but e.g. PnP + Zero Installs of Yarn 2 are actually built around vendoring – you commit .yarn/cache with many ZIP files into the repo.
"Vendoring" inherently brings tradeoffs between repo size (longer clone times, more data transferred, local storage requirements etc.) and reliability / reproducibility of installs.
Summarizing other, (too?) long answers:
Vendoring is hard-coding an (often forked) version of a dependency.
This typically involves static linking or some other form of copying, but it doesn't have to.
Right or wrong, the term "hard-coding" has an old and bad reputation, so you won't find it near projects that openly vendor; however, I can't think of a more accurate term.
As far as I know the term comes from Ruby on Rails.
It describes a convention to keep a snapshot of the full set of dependencies in source control, in directories that contain package name and version number.
The earliest occurrence of vendor as a verb I found is the vendor everything post on Err the Blog (2007, a bit before the author co-founded GitHub). That post explains the motivation and how to add dependencies. As far as I understand the code and commands, there was no special tool support for calling the directory vendor at that time (patches and code snippets were floating around).
The err blog post links to earlier ones with the same convention, like this fairly minimal way to add vendor subdirectories to the Rails import path (2006).
Earlier articles referenced from the err blog, like this one (2005), seemed to use the lib directory, which didn't distinguish between your own code and untouched snapshots of dependencies.
The goals of vendoring are reproducibility and better deployment (the kind of things people currently use containers for), as well as better transparency through source control.
Other languages seem to have picked up the concept as is; one related concept is lockfiles, which define the same set of dependencies in a more compact form, involving hashes and remote package repositories. Lockfiles can be used to recreate the vendor directory and detect any alterations. The lockfile concept may have come from the Ruby gems community, but don't quote me on that.
The solution we’ve come up with is to throw every Ruby dependency in vendor. Everything. Savvy? Everyone is always on the same page: we don’t have to worry about who has what version of which gem. (we know) We don’t have to worry about getting everyone to update a gem. (we just do it once) We don’t have to worry about breaking the build with our libraries. […]
The goal here is simple: always get everyone, especially your production environment, on the same page. You don’t want to guess at which gems everyone does and does not have. Right.
There’s another point lurking subtly in the background: once all your gems are under version control, you can (probably) get your app up and running at any point of its existence without fuss. You can also see, quite easily, which versions of what gems you were using when. A real history.
I've been using c3p0 with hibernate for a couple of years. When looking at exception stack traces, I see classes such as com.mchange.v2.c3p0.impl.NewProxyPreparedStatement in the stack. I went looking for the source code for these classes and came across the curious com.mchange.v2.c3p0.codegen package.
In particular, it looks like JdbcProxyGenerator is metaprogramming in Java. I'm having a hard time understanding the codegen mechanism and why it is used. The built jar contains these generated classes, so I'm assuming these classes are built during the build, perhaps as part of a two-phase build. The codegen package does not appear to be in the generated jar.
Any insight would be appreciated, just for my own curiosity. Thanks!
Yes, you are absolutely right.
c3p0 uses code generation to generate non-reflective proxy implementations of large JDBC interfaces, "java bean" classes with lots of properties, and some classes containing debug and logging flags (to set up conditional compilation within the build).
You can always see the generated classes by typing ant codegen in the source distribution, and then looking at the build/codebase directory. The latest binary distribution of c3p0 (0.9.2-pre2) includes the generated sources in a src.jar file, which you can also find as a maven artifact at http://repo1.maven.org/maven2/com/mchange/c3p0/0.9.2-pre2-RELEASE/c3p0-0.9.2-pre2-RELEASE-sources.jar
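For contrast, here is a minimal sketch of the reflective alternative that the code generation avoids, using java.lang.reflect.Proxy (the demo class and comments are mine, not c3p0's; the generated proxies are plain delegating classes with no per-call reflection):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;

public class ReflectiveProxyDemo {
    // Wraps a Connection so that every call funnels through invoke(),
    // paying reflection overhead on each method call.
    public static Connection wrap(final Connection target) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    // hook point: logging, pool bookkeeping, statement caching, ...
                    return m.invoke(target, args);
                }
            });
    }
}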
I hope this helps!