Why and how does Polymer use Bower, and do I NEED to learn Bower to use Polymer?
I was going through the Catalog of components, and all of them seem to have a 'Bower Command'.
Thanks for your help.
Edit: Bower, like npm, is a package manager; I do understand that much. What I meant to ask is: npm arguably has a wider user base than Bower, and some even argue that we should stop using Bower altogether, like here and here. So how does using Bower benefit Polymer when there are other options? Is what Polymer does only achievable through Bower?
Bower, just like npm, is a package manager. Here you can see the difference between the two.
No, you don't need to use Bower to use Polymer, but without it you'll have to manually download each component you need, place it where you can reference it, and keep track of newer versions of each package you use.
If you are creating custom elements to publish, the situation gets even worse: you'll have to ship a file with your project listing all of its dependencies, and users will have to manually download each dependency listed there, then make sure they also have all the dependencies required by your dependencies, and so on.
This makes custom elements, or modules in general, very hard to use. That's why such projects use a package management tool.
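For illustration, with Bower those dependencies are declared once in a bower.json and fetched with a single bower install; a minimal sketch (element names and version ranges are just examples):
{
  "name": "my-app",
  "dependencies": {
    "polymer": "Polymer/polymer#^1.0.0",
    "paper-button": "PolymerElements/paper-button#^1.0.0"
  }
}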
Edit: since the original question has been edited to ask more about the why, the short answer is that Bower was focused on web dependencies, so it produces a flat dependency tree. With Bower now deprecated, the Polymer team's recommendation is to use Yarn with the --flat option. That also results in a flat dependency structure without multiple versions of the same dependency, which is critical to web development, and something NPM has stated they will never offer.
You should see more components move from Bower to Yarn, especially after Polymer 3 is released. For more information than you'd ever want on this topic, check out this discussion: https://github.com/package-community/discussions/issues/2
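As a rough sketch of the Yarn route (the @polymer/polymer package name assumes Polymer 3's npm scope, and --flat is a Yarn 1.x option):
# add the dependency, then resolve the whole tree flat
yarn add @polymer/polymer
yarn install --flat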
Using clojurescript 1.10.758 and reagent 1.0.0, I am running into an error in which a file index.js tries to reference $jscomp, which is not defined.
I've seen a number of Stackoverflow and Github issues related to $jscomp being undefined in the context of shadow-cljs, but I'm not using that.
The problem occurs when I use a development mode build with figwheel (using Leiningen with cljsbuild and the figwheel plugin), and also occurs if I use cljsbuild for a once-only development build. Strangely, if I use webpack to create a bundle, the problem does not occur.
Before I tried to make webpack work, I did have working code without webpack. Something I changed seems to have affected the non-bundled build. The only change I can think of was installing react and react-dom with npm, and excluding those packages from reagent in Leiningen's dependencies. But undoing the exclusion didn't make the non-bundled code work again.
Any suggestions for how to cause $jscomp to be defined when it's first needed?
$jscomp is related to the Closure Compiler and the polyfills it creates.
It might be enough to set the :language-out :es6 compiler option, which is somewhat similar to the :output-feature-set option used by shadow-cljs. The best way to debug this is to find the actual code that is getting polyfilled and why, which might require digging through some compiled JS.
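For example, with lein-cljsbuild that option lives in the build's :compiler map (the build id, paths and :main below are placeholders):
:cljsbuild {:builds
            [{:id           "dev"
              :source-paths ["src"]
              :figwheel     true
              :compiler     {:main          my.app.core
                             :output-to     "resources/public/js/app.js"
                             :output-dir    "resources/public/js/out"
                             :language-out  :es6
                             :optimizations :none}}]}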
shadow-cljs uses the Closure Compiler more extensively than regular CLJS or figwheel, but they use it as well. Solutions that apply to shadow-cljs mostly apply to other tools too; the settings may just work a little differently.
Recently Polymer updated from 0.5.2 to 0.5.3. Some of these changes affect the styling of my components, e.g.:
paper-checkbox
Updated paper-checkbox to match Material Design guidelines
To style properly, must now set border-color along with background-color
My bower include targets a specific version:
"paper-elements": "Polymer/paper-elements#0.5.2"
but bower.json in paper-elements uses the caret:
"paper-checkbox": "Polymer/paper-checkbox#^0.5.0"
so when I run bower update it happily fetches paper-checkbox version 0.5.4.
Is there a way to ensure bower grabs a specific version of these dependencies, without having to list every single package in my own bower.json? E.g., I could explicitly specify paper-checkbox#0.5.2, but because paper-checkbox has its own dependencies using the caret syntax, I'd have to include all dependencies of all the elements I use, recursively.
Am I just supposed to immediately update my code whenever a new Polymer minor version is released?
I guess the answer is yes, I do need to explicitly list all dependencies in the entire dependency graph if I want to ensure bower pulls down exact versions of those packages. Oh well!
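In other words, the bower.json ends up pinning each element (and each element's own element dependencies) to an exact tag; a sketch of what that looks like (the list here is illustrative, not exhaustive):
"dependencies": {
  "paper-elements": "Polymer/paper-elements#0.5.2",
  "paper-checkbox": "Polymer/paper-checkbox#0.5.2",
  "core-elements": "Polymer/core-elements#0.5.2"
}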
What is the best way to create a custom OpenShift cartridge?
Looking at documentation and examples, I am seeing a lot of old-school compile-from-source installation of the component that the cartridge needs to run.
Some examples that compile from source:
https://www.openshift.com/blogs/lightweight-http-serving-using-nginx-on-openshift
https://github.com/boekkooi/openshift-diy-nginx-php/blob/master/.openshift/action_hooks/build_nginx
https://github.com/razorinc/redis-openshift-example/blob/master/.openshift/action_hooks/build
and a ton of others.
I need to create some custom cartridges on my project, but doing it this way feels wrong.
Is there any reason I can't use yum and puppet/augeas to do the building, instead of curl, make, and sed?
Or is this the best practice? If so, why are we doing this 2000-style?
I'll do my best to explain this. Feel free to let me know if I need to explain anything in more detail.
I'm assuming you're creating a custom binary cartridge (i.e. a language cartridge such as Ruby, Python, etc.). Since none of the nodes have that binary installed on the system, the custom cartridge you're creating will need to provide that binary and its libraries.
When you install a package with yum, it's going to install items in several different directories (/etc, /usr, /var, etc.). Since you're creating a cartridge that will be copied over to several nodes, you'll need to package all of these items in a way that can be copied to a node and then executed without having to install them on the system.
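As a rough sketch of the idea, the binary and its libraries ship inside the cartridge's own directory tree rather than being installed system-wide; the layout below loosely follows the v2 cartridge format linked in the docs, and the names are illustrative:
mycart/
  metadata/manifest.yml   (cartridge metadata)
  bin/setup               (prepare the bundled binary on the gear)
  bin/control             (start/stop/status hooks)
  usr/                    (the binary and its libraries, shipped with the cartridge)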
As for docs, I would suggest taking a look at these:
https://www.openshift.com/developers/download-cartridges
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-1
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-2
The old monolithic clojure.contrib was available as a .jar from the same place you got the clojure .jar, and you used it by pointing your classpath at it. As far as I can tell, the new modular contribs aren't available in the clojure .jar -- instead, they exist as source files on github. What's the expected way for you to use them? Say, e.g., I wanted to use something in clojure.math.numeric-tower. What would I do?
I've found How do I install Clojure 1.3 with contribs on RHEL 6.1 / JDK7?, but the only answer ("use leiningen") isn't detailed enough for me to figure out. (Searching clojars for numeric-tower yields... nothing.)
You install a contrib module by adding its info to :dependencies in your project.clj file. The next time you run lein for something, it notices your change and automatically grabs the library for you.
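For example, a minimal project.clj (the project name is a placeholder; numeric-tower is published under the org.clojure group):
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.3.0"]
                 [org.clojure/math.numeric-tower "0.0.1"]])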
I have some more detailed instructions written up here.
As stated in Maven Settings and Repositories, the repository where all Clojure artifacts are deployed is Sonatype OSS Nexus. If you don't want to go the Leiningen or Maven way, which I would still advise you to consider even for one-off experiments, you can still manually download all the artifacts from that repository. Specifically, here are all the uploaded versions of clojure.math.numeric-tower.
I can understand the reluctance to use Leiningen, though it took me longer to write this sentence than to create a new project.
My usual first stop for this sort of question is http://dev.clojure.org/display/design/Where+Did+Clojure.Contrib+Go
Then I click "latest release", get the artifact id and version, and add a line to the project.clj's :dependencies section like so:
[org.clojure/math.numeric-tower "0.0.1"]
If you use Clojure, you should really also be using either Leiningen or Maven to manage your dependencies. I believe these are the only sane ways to stay on top of a complex dependency graph as your project gets larger and has more complex build requirements.
For example, I use Maven and have the following in my project's pom.xml to include the numeric dependencies:
<dependency>
  <groupId>org.clojure</groupId>
  <artifactId>math.numeric-tower</artifactId>
  <version>0.0.1</version>
</dependency>
All the modular Clojure contrib libraries can be included in the same way.
I'm using Mercurial for personal use and am contemplating it for some distributed projects as an alternative to SVN, for various reasons.
I'm getting comfortable with using it for self-contained projects and can see various options for sharing; however, I haven't yet found any guidance on managing common libraries that are included in multiple projects, in a manner similar to Subversion's externals.
The most obvious shared lump of code is error handling and reporting; we want this to be pretty much the same in all projects (it's fairly well evolved). There is also utility code, control libraries, and the like that we find better to have as projects built with each solution than to pull in as compiled classes (not least because it ensures they are kept up to date; continuous integration helps us address breaking changes).
Thoughts? (I hate open-ended questions, but I want to know what, if anything, others are doing.)
Mercurial 1.3 now includes nested repository support, which can be used to express dependencies. The other option is to let your build system handle the download and tracking of dependencies using something like Ivy or Maven, though those are more focused on pulling down compiled code.
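A minimal sketch of the nested-repository (subrepo) route: clone the shared library inside the parent working copy and list it in an .hgsub file (the path and URL are placeholders):
lib/errorhandling = https://hg.example.com/errorhandling
Once .hgsub is committed, Mercurial records the exact revision of each subrepo in .hgsubstate on every commit of the parent repository.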
The world has changed since I asked that question and the solution I now use is different.
The simple answer now is to use packages (specifically NuGet, since I work in .NET) to deliver the common code, instead of nesting repos and including the projects in a solution.
So I have common code built into NuGet packages by, and hosted using, TeamCity; where previously I would have had an external and included the project/source, I now just reference the package.
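For example, a consuming project just lists the package in its packages.config (the package id, version and target framework here are placeholders):
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="MyCompany.ErrorHandling" version="1.2.0" targetFramework="net45" />
</packages>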
Use the Forest extension; it emulates svn externals for hg, to some extent at least.
Subrepositories (with a good guide) or guestrepo ("to overcome ... limitations" of subrepos) are today's language-agnostic answers.