In JavaScript's version of React I can use this.props, but what can I use to get props in the :reagent-render callback?
I am trying to do the same as is done here in Chart.js, line 14.
To answer your question, you can access the component and its props in :reagent-render by doing something like this:
(ns chartly.core
  (:require
   [reagent.ratom :as ra]
   [reagent.core :as r]))

(defn data-fn [implicit-this]
  ;; use implicit-this as needed; it is equivalent to "this"
  ;; from the create-class docs:
  ;; :component-did-mount (fn [this])
  )

(defn chart-render []
  (let [comp     (r/current-component) ;; Reagent function
        r-props  (r/props comp)        ;; Reagent function
        js-props (.-props (clj->js comp))]
    (js/console.log "PROPS" r-props) ;; etc.
    [:div {:style {:width 100}}]))

(def data (ra/atom {})) ;; see more info on reagent atoms

(defn chart-component []
  (r/create-class
   {:component-did-mount data-fn
    :display-name        "chart-component"
    :reagent-render      chart-render}))
To use -
[chart-component]
However, this is an anti-pattern and will be quite difficult to manage, since you are trying to update the data prop internally from component-did-mount, which, on completion, would need to manually signal the React component to update itself.
One of the features of Reagent is that it detects changes and updates the component for you, usually via an atom. See Managing State in Reagent for more info.
What you are looking to do is exactly what the re-frame framework helps you manage. You set up a data store, and when the store changes, it signals its subscribers (React elements) to update accordingly; you don't have to handle the change signals yourself.
There are some edge cases where tapping into the lifecycle events is necessary, especially with charts and other drawing libraries, but I would recommend revisiting that only if you find reagent atoms and/or the re-frame library insufficient for your needs. Hope this helps.
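As a rough sketch of the atom-driven alternative (the atom and component names here are made up, not from your code), the usual Reagent pattern is to deref a reagent atom inside the render function, so the component re-renders automatically whenever the atom changes:

```clojure
(ns example.core
  (:require [reagent.core :as r]))

;; any component that derefs this atom re-renders when it changes
(defonce chart-data (r/atom {:points []}))

(defn chart-view []
  [:div "Number of points: " (count (:points @chart-data))])

;; updating the atom from anywhere (e.g. a fetch callback)
;; triggers a re-render, with no manual signalling:
(swap! chart-data update :points conj {:x 1 :y 2})
```

With this shape, component-did-mount is no longer responsible for pushing data into the component.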
As far as I can see, you accept some Hiccup data from a user as a string, right? And then try to evaluate it in a user namespace where only the reagent library is loaded?
First, the more code you build up for evaluation, the more difficult it becomes to understand. You could use something like this:
(binding [*ns* user-ns] (eval (read-string user-data)))
Also, to guard against bad input, it would be better to validate the user's input with either the Schema or clojure.spec library. Since read-string returns a data structure, it can be checked by either of those two, so you would see an error before starting to evaluate anything.
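A minimal sketch of that idea with clojure.spec (the spec here is made up and deliberately loose; real Hiccup validation would need to be stricter and recursive):

```clojure
(ns example.validate
  (:require [clojure.spec.alpha :as s]
            [clojure.edn :as edn]))

;; a loose spec: a Hiccup form is a vector starting with a keyword
(s/def ::hiccup
  (s/and vector? #(keyword? (first %))))

(defn safe-read [user-data]
  (let [form (edn/read-string user-data)]
    (if (s/valid? ::hiccup form)
      form
      (throw (ex-info "Invalid Hiccup"
                      (s/explain-data ::hiccup form))))))
```

Note that edn/read-string never evaluates code, so the check happens on plain data before anything reaches eval.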
I am trying to use AWS Amplify Authentication lib in a re-frame app.
The lib provides a higher order component withAuthenticator which is supposed to wrap the main view of your app. I am trying to use reactify-component and adapt-react-class but unfortunately I get the following error:
Uncaught TypeError: Failed to construct 'HTMLElement': Please use the 'new' operator, this DOM object constructor cannot be called as a function.
(defn main-panel []
  [:div
   [:h1 "Hello"]])

(def root-view
  (reagent/adapt-react-class
   (withAuthenticator
    (reagent/reactify-component main-panel))))

(defn ^:dev/after-load mount-root []
  (re-frame/clear-subscription-cache!)
  (aws-config/configure)
  (re-frame/dispatch-sync [::events/initialize-db])
  (reagent/render [root-view]
                  (.getElementById js/document "app")))
Any help is appreciated
I had this issue with reagent + amplify.
Solved it with two changes, but I'm unsure whether both are needed.
#1 Change the output of the Google Closure compiler to ES6 (or higher). Amplify seems to use ES6 features that cannot be polyfilled.
This is for shadow-cljs (shadow-cljs.edn), but this should be possible for other build systems as well.
{:builds {:app {:compiler-options {:output-feature-set :es6}}}}
Disclaimer: I switched to shadow-cljs from lein-cljsbuild since I could not get lein-cljsbuild to respect the configuration for es6 output.
#2 Use functional components.
In reagent 1.0.0-alpha1 and up you can change the compiler to produce functional components by default.
(ns core
(:require
[reagent.core :as r]))
(def functional-compiler (r/create-compiler {:function-components true}))
(r/set-default-compiler! functional-compiler)
There are other ways to make functional components. Check the documentation if you don't like, or can't use, this approach.
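For instance, Reagent 1.0 also supports marking a single component as functional with the :f> shorthand, without changing the default compiler (the component name below is made up):

```clojure
(defn my-fn-component []
  [:div "rendered as a React function component"])

;; :f> renders just this one component as a functional
;; component; the rest of the app is unaffected
(defn app []
  [:div
   [:f> my-fn-component]])
```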
I am going through this Om tutorial, but it's not clear to me when to use Om components vs. plain functions (in particular, the om/component macro).
The tutorial writes:
The first argument is a function that takes the application state data
and the backing React component, here called owner. This function must
return an Om component - i.e. a model of the om/IRender interface,
like om.core/component macro generates
;; here the function (fn [app owner]) indeed returns an Om component
(om/root
 (fn [app owner]
   (om/component (dom/h2 nil (:text app))))
 app-state
 {:target (. js/document (getElementById "app"))})
In the next section we find the following example of a rendering loop for a list:
;; this one does not return an Om component (or does it?); it returns virtual DOM
(om/root
 (fn [app owner]
   (apply dom/ul nil
          (map (fn [text] (dom/li nil text)) (:list app))))
 app-state
 {:target (. js/document (getElementById "app0"))})
Here we're basically just returning (virtual) DOM directly, not wrapped in an Om component, so the question is: why does the om/component macro exist? The macro simply helps us reify the IRender function, but it appears we can also use plain functions for that. I would reify Om components that have lifecycle state (or need the owner to call get-props), but for components that just need to create virtual DOM I'd rather use simple functions (so I don't need the build/build-all functions to create my virtual DOM). What am I missing here? Why is the macro still useful?
I had this same question last week, and I dug through the Om source code to find out.
I couldn't find any functional difference between using the om/component macro and not. But maybe this info can shed some light on someone who knows more about React.
Any function f passed to om/root (and subsequently om/build) is placed inside of a container Om component. This Om component is just a dummy React component that forwards all life-cycle events to the result of f if it implements Om's lifecycle protocols (i.e. when it is a reify object).
If the result of f is not a reify object that implements those protocols, it is assumed to be a React component, and it is used as the value returned by the render lifecycle function.
(relevant: Om's render function here)
The om/component macro is just shorthand for the reify boilerplate when you do not need to pass in state.
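Roughly, the macro saves you from writing the reify by hand; this sketch shows the equivalence (the exact expansion in Om's source may differ slightly):

```clojure
;; using the macro:
(om/component
 (dom/h2 nil "hello"))

;; is roughly equivalent to:
(reify
  om/IRender
  (render [_]
    (dom/h2 nil "hello")))
```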
What is an idiomatic way to handle application configuration in clojure?
So far I use this approach:
;; config.clj
{:k1 "v1"
 :k2 2}

;; core.clj
(defn config []
  (let [content (slurp "config.clj")]
    (binding [*read-eval* false]
      (read-string content))))

(defn -main []
  (let [config (config)]
    ...))
This has many downsides:
The path to config.clj might not always be resolved correctly
No clear way to structure config sections for the libraries/frameworks used
Not globally accessible (e.g. #app/config), which of course can be seen as good functional style, but makes access to the config across source files tedious
Bigger open-source projects like Storm seem to use YAML instead of Clojure and make the config accessible globally via a somewhat ugly hack: (eval `(def ~(symbol new-name) (. Config ~(symbol name)))).
First, use clojure.edn, and in particular clojure.edn/read. E.g.:
(require '[clojure.java.io :as io]
         '[clojure.edn :as edn])

(defn from-edn
  [fname]
  (with-open [rdr (-> (io/resource fname)
                      io/reader
                      java.io.PushbackReader.)]
    (edn/read rdr)))
Regarding the path of config.edn: using io/resource is only one way to deal with this. Since you probably want to save an altered config.edn at runtime, you may want to rely on the fact that the path for file readers and writers constructed with an unqualified filename like
(io/reader "where-am-i.edn")
defaults to
(System/getProperty "user.dir")
Considering that you may want to change the config at runtime, you could implement a pattern like this (rough sketch):
;; myapp.userconfig
(def default-config {:k1 "v1"
                     :k2 2})

(def save-config (partial spit "config.edn"))
(def load-config #(from-edn "config.edn")) ;; see from-edn above

(let [cfg-state (atom (load-config))]
  (add-watch cfg-state :cfg-state-watch
             (fn [_ _ _ new-state]
               (save-config new-state)))
  (def get-userconfig #(deref cfg-state))
  (def alter-userconfig! (partial swap! cfg-state))
  (def reset-userconfig! #(reset! cfg-state default-config)))
Basically this code wraps an atom that is not global and provides get and set access to it. You can read its current state and alter it, as with any atom, with something like (alter-userconfig! assoc :k2 3). For testing, you can reset! the userconfig, and also inject various userconfigs into your application: (alter-userconfig! (constantly {:k1 300, :k2 212})).
Functions that need userconfig can be written like
(defn do-sth [cfg arg1 arg2 arg3]
...)
And be tested with various configs like default-userconfig, testconfig1,2,3...
Functions that manipulate the userconfig like in a user-panel would use the get/alter..! functions.
The above let also wraps a watch on the userconfig that automatically updates the .edn file every time the userconfig is changed. If you don't want this, you could instead add a save-userconfig! function that spits the atom's content into config.edn. However, you may then want a way to add more watches to the atom (like re-rendering the GUI after a custom font size has been changed), which in my opinion would break the mold of the above pattern.
Instead, if you were dealing with a larger application, a better approach would be to define a protocol (with functions similar to those in the let block) for the userconfig, and implement it with various constructors for a file, a database, an atom, or whatever you need for testing and other scenarios, using reify or defrecord. An instance of this could be passed around the application, and every state-manipulating/IO function should use it instead of anything global.
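A rough sketch of that protocol approach (all names here are invented for illustration):

```clojure
(defprotocol ConfigStore
  (get-config [this])
  (alter-config! [this f]))

;; an atom-backed store, handy for tests
(defn atom-store [initial]
  (let [state (atom initial)]
    (reify ConfigStore
      (get-config [_] @state)
      (alter-config! [_ f] (swap! state f)))))

;; a file-backed store could implement the same protocol
;; using slurp/spit and edn read/write instead
(def store (atom-store {:k1 "v1" :k2 2}))
(alter-config! store #(assoc % :k2 3))
(get-config store) ;; => {:k1 "v1", :k2 3}
```

Because callers only see the protocol, swapping the file-backed store for the atom-backed one in tests requires no changes to application code.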
I wouldn't bother even keeping the configuration maps as resources in a separate file (for each environment). Confijulate (https://github.com/bbbates/confijulate , yes - this is a personal project) lets you define all your configuration for each environment within a single namespace, and switch between them via system properties. But if you need to change values on the fly without rebuilding, Confijulate will allow you do that, too.
I've done a fair bit of this over the past month for work. For the cases where passing a config around is not acceptable, we've used a global config map in an atom. Early in the application start-up, the config atom is swap!ed with the loaded config, and after that it is left alone. This works in practice because it is effectively immutable for the life of the application. This approach may not work well for libraries, though.
I'm not sure what you mean by "No clear way to structure config sections for used libraries/frameworks". Do you want libraries to have access to the config? Regardless, I created a pipeline of config loaders that is given to the function that sets up the config at start-up. This allows me to separate config based on library and source.
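Such a loader pipeline might look something like this (a sketch; the loader names and keys are invented, and it reuses from-edn from the earlier answer):

```clojure
;; each loader returns a (possibly nil or empty) config map
(defn defaults-loader []
  {:port 8080 :log-level :info})

(defn file-loader []
  (try (from-edn "config.edn")
       (catch Exception _ {})))

(defn env-loader []
  (when-let [p (System/getenv "APP_PORT")]
    {:port (Long/parseLong p)}))

;; later loaders win; nil results are dropped
(defn load-config [loaders]
  (apply merge (keep #(%) loaders)))

(load-config [defaults-loader file-loader env-loader])
```

The ordering of the vector encodes precedence, so per-environment overrides are just a matter of appending another loader.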
I'm integrating some ClojureScript code with a JS library call that takes a callback function. The JS library passes data to the callback using JavaScript's "this" keyword.
I can get it to work using (js* "this"). For example:
(libraryCall (fn [] (.log console (js* "this"))))
Is there a way to get at the "this" context from ClojureScript without resorting to js*?
Use the built-in this-as macro. It takes a name and a body, and evaluates the body with the name bound to JavaScript this.
e.g.
(libraryCall (fn [] (this-as my-this (.log js/console my-this))))
Great question... had to dig into the compiler code to find it, it's not well advertised at all.
I'll add it to the book.
Let's say I want to build a large Clojure library with several components. As a developer, I would like to keep many of the components in separate namespaces, since many helper functions can have similar names. I don't necessarily want to make things private, since they may have utility outside in extreme cases, and the workarounds for private vars are not good. (In other words, I would like to suggest code usage, not completely prevent usage.)
However, I would like the users of the library to operate in a namespace with a union of a subset of many of the functions in each sub library. What is the idiomatic or best way to do this? One solution that comes to my mind is to write a macro that generates :requires and creates a new var mapping by def'ing a given list of var names (see first code example). Are there tradeoffs with this method such as what happens to extending types? Is there a better way (or builtin)?
Macro example (src/mylib/public.clj):
(ns mylib.public
  (:require [mylib.a :as a]
            [mylib.b :as b]))

(transfer-to-ns [+      a/+
                 -      b/-
                 cat    b/cat
                 mapper a/mapper])
Again, to clarify, the end goal would be to have some file in other projects created by users of mylib to be able to make something like (src/someproject/core.clj):
(ns someproject.core
  (:require [mylib.public :as mylib]))

(mylib/mapper 'foo 'bar)
@Jeremy Wall, note that your proposed solution does not fulfill my needs. Let's assume the following code exists.
mylib/a.clj:
(ns mylib.a)
(defn fa [] :a)
mylib/b.clj:
(ns mylib.b)
(defn fb [] :b)
mylib/public.clj:
(ns mylib.public
  (:use [mylib.a :only [fa]]
        [mylib.b :only [fb]]))
somerandomproject/core.clj: (Assume classpaths are set correctly)
(ns somerandomproject.core
  (:require [mylib.public :as p]))

;; somerandomproject.core=> (p/fa)
;; CompilerException java.lang.RuntimeException: No such var: p/fa, compiling: (NO_SOURCE_PATH:3)
;; somerandomproject.core=> (mylib.a/fa)
;; :a
If you notice, :use-ing the functions in mylib/public.clj does not make public.clj provide those vars to the user file somerandomproject/core.clj.
You might find it interesting to look at how a library like Compojure or Lamina organizes its "public" api.
Lamina has "public" namespaces like lamina.api that serve to alias (using Zach's import-fn from his potemkin library) functions from the "internal" namespaces like lamina.core.pipeline. With a bit of docs, this clearly delineates the public-facing namespaces from the likely-to-change internals. I've found the major drawback of this strategy is that import-fn makes it much more difficult to walk (in Emacs) from the use of a function to its implementation, or to know which function to use clojure.repl/source on, for example.
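Applied to the example namespaces from the question, the potemkin approach would look roughly like this (a sketch; potemkin also provides import-vars, which batches several import-fn calls):

```clojure
(ns mylib.public
  (:require [potemkin :refer [import-vars]]
            [mylib.a]
            [mylib.b]))

;; re-export internal vars so users can call
;; mylib.public/fa, mylib.public/fb, etc.
(import-vars
 [mylib.a fa mapper]
 [mylib.b fb cat])
```

Unlike plain :use, this creates real vars in mylib.public, so (p/fa) resolves from a consuming namespace.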
A library like Compojure uses private vars to separate public/private parts of the functions (see compojure.core for example). The major drawback with "private" functions is that you may later find you want to expose them or that it makes testing more complicated. If you control the code base, I don't find the private aspects to be a big deal. The testing issue is easily worked around by using #'foo.core/my-function to refer to the private function.
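The var-reference trick for testing looks like this (namespace and function names are made up):

```clojure
(ns foo.core)

(defn- my-function [x]
  (inc x))

;; in a test namespace, calling through the var
;; bypasses the privacy check:
(ns foo.core-test)

(#'foo.core/my-function 41) ;; => 42
;; #'foo.core/my-function is reader shorthand for
;; (var foo.core/my-function)
```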
Generally I tend to use something more like Compojure's style than Lamina's but it sounds like you'd prefer something more like Lamina.
I'm not exactly sure what you're asking here. I think maybe you want to know the best practice for importing publics from a namespace for shared utility functions? In that case, the refer function is what you are looking for, I think: http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/refer
(refer mylib.a :only [+])
(refer mylib.b :only [-])
It imports the public items in a namespace into the current namespace. However, the preferred method would be to do this in your namespace declaration with a :use directive:
(ns mylib.public
  (:use [mylib.a :only [+]]
        [mylib.b :only [-]]))