I was happily using figwheel all day. I terminated the process by entering :cljs/quit.
When I try to restart figwheel with lein figwheel, I'm greeted with this message from Leiningen:
'figwheel' is not a task. See 'lein help'
Running lein help lists many tasks I can perform, but figwheel is not among them.
Here's what my project.clj looks like (extra stuff elided):
(defproject myproject
  ...
  :dependencies [...]
  :plugins [[lein-environ "1.0.2"]
            [lein-cljsbuild "1.1.1"]
            [lein-asset-minifier "0.2.4"]]
  ...
  :profiles {:dev {:dependencies [...
                                  [lein-figwheel "0.5.0-6"]
                                  ...]
                   :plugins [[lein-figwheel "0.5.0-6"]
                             ...]
                   :figwheel {...}}}
  ...)
Here's what I've tried so far:
Verified I was in the correct directory
Checked out all code changes made since the last successful figwheel start
Added [lein-figwheel "0.5.0-6"] to the base :plugins vector (this sort of worked, but didn't recognize any of my profile-specific settings)
Restarted my computer
You can type lein help profiles to read all about profiles. The problem in this case is caused by:
Remember that if a profile with the same name is specified in multiple
locations, only the profile with the highest "priority" is picked – no
merging is done. The "priority" is – from highest to lowest –
profiles.clj, project.clj, user-wide profiles, and finally system-wide
profiles.
It's using the :dev profile from profiles.clj, which doesn't have figwheel. This is also why adding lein-figwheel to the base :plugins sort of helped but didn't pick up any of your profile-specific settings.
There's a straightforward solution suggested by the docs:
If you need to enable personal overrides of parts of a profile, you can use a composite profile with common and personal parts - something like :dev [:dev-common :dev-overrides]; you would then have just :dev-overrides {} in project.clj and override it in profiles.clj.
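For instance (a sketch; the profile contents shown are illustrative), project.clj would define the composite :dev profile plus an empty override slot:

(defproject myproject
  ...
  :profiles {:dev [:dev-common :dev-overrides]
             :dev-common {:dependencies [[lein-figwheel "0.5.0-6"]]
                          :plugins [[lein-figwheel "0.5.0-6"]]
                          :figwheel {...}}
             :dev-overrides {}}
  ...)

and profiles.clj would then fill in only :dev-overrides instead of replacing :dev wholesale:

{:dev-overrides {:env {:local-setting true}}}

The :env entry is just a placeholder for whatever personal settings lived in profiles.clj before; because :dev is now a composite, lein merges both parts rather than letting the profiles.clj :dev shadow the project one.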
Related
I'm checking binary stripping in RPM packaging and get this:
__spec_install_post:
...
__os_install_post
...
__os_install_post:
...
%{!?__debug_package:
    /usr/lib/rpm/redhat/brp-strip %{__strip}
    /usr/lib/rpm/redhat/brp-strip-comment-note %{__strip} %{__objdump}
}
...
I couldn't find where __spec_install_post is used anywhere.
Is this macro invoked directly by rpmbuild? Documentation would be great.
There seem to be a lot of "leaf-like" macros in the invocation chain, which is confusing.
I only found references in some bug reports and the like; these seem to be magic undocumented macros that are run at certain stages, e.g. in your case right after ("post") the "install" stanza of your specfile.
An actual list would be great, but I think you'll only find what you already have by digging through /usr/lib/rpm/macros and friends.
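One way to poke at the macro chain (assuming a stock rpm installation; the grep context size is arbitrary) is to dump the effective macro table and search it:

rpm --showrc | grep -A 5 __spec_install_post

That shows the expanded definition rpmbuild will actually use, including which leaf macros it invokes.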
I have the following piece of code in a ClojureScript project:
(ns project.lib
  (:require [cljs.test :refer-macros [is]]))

(defn my-fn [p]
  {:pre [(is (#{:allowed-key :another-allowed-key} p))]}
  ;; ...
  )
I would like to know whether I can control the behaviour of the :pre and :post assertions, and more generally how to make sure that code related to parameter checking is not included in a production build.
Note: I am aware of the :closure-define compiler option, but I'm not sure how to apply it to this specific case.
You can set the compiler option :elide-asserts to true to eliminate all assertions, including :pre and :post assertions.
This flag is independent of :advanced and needs to be set even under that mode to eliminate the assertions from production code.
See https://github.com/clojure/clojurescript/wiki/Compiler-Options#elide-asserts
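For example, if using lein-cljsbuild, it might look like the following sketch (the build id and paths are illustrative):

:cljsbuild {:builds [{:id "min"
                      :source-paths ["src"]
                      :compiler {:output-to "resources/public/js/app.js"
                                 :optimizations :advanced
                                 :elide-asserts true}}]}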
Also note that, generally, the cljs.test namespace would only be used in unit test namespaces, which would be placed in a separate directory (perhaps under "test" as opposed to "src") and, if using lein, you would set :source-paths so as to not include the tests in your production builds.
Having said that, using :pre and :post is perfectly fine in production code—just use "regular" predicates instead of the cljs.test is macro. For your specific example, is could be eliminated as the precondition simply needs to evaluate to something truthy.
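For instance, dropping is from the example above leaves a plain set-membership predicate, which is truthy/falsey in the same way and still gets elided under :elide-asserts:

(defn my-fn [p]
  {:pre [(#{:allowed-key :another-allowed-key} p)]}
  ;; ...
  )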
I am trying to use figwheel in my ClojureScript build.
It works with lein cljsbuild auto already, but I have to set :optimizations :whitespace.
Otherwise I get this message in the browser:
Uncaught ReferenceError: goog is not defined
However, figwheel requires :optimizations :none to run. Here is the relevant part of my Leiningen file:
:cljsbuild {:builds
            [{:id "dev"
              :source-paths ["src/cljs"]
              :figwheel {:websocket-host "localhost"
                         ;; :on-jsload "example.core/fig-reload"
                         :autoload true
                         :heads-up-display true
                         :load-warninged-code true
                         ;; :url-rewriter "example.core/fig-url-rewrite"
                         }
              :compiler {;; :main
                         :output-to "resources/public/js/gdb/gdb.js"
                         :output-dir "resources/public/js/gdb/cljsbuild-dev"
                         ;; :asset-path "js/out"
                         :optimizations :none
                         :source-map "resources/public/js/gdb/gdb.js.map"
                         :pretty-print true}}]}
What am I missing to get the required dependencies loaded?
It turns out this is a classic case of RTFM.
The answer was in the ClojureScript quickstart guide.
Specifically, I had to add a :main field, as specified in the Less Boilerplate section:
:main "example.core"
Nothing jumps out as being obviously wrong or missing. However, lein is pretty powerful in the degree to which it lets you set things up to fit your personal taste/workflow, so it is hard to spot problems if the approach is significantly different.
When I run into these types of problems, I find using the standard templates provided by many libraries or projects really useful. My recommendation would be to run
lein new figwheel ft -- --reagent
This will set up a basic project called ft (in this case also with reagent; there is another option for om, or you could leave all of this out for a bare-bones default). See the figwheel repo on GitHub for more details. This will provide a good working lein figwheel setup which you can use as a guide.
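From there, assuming the template generates cleanly, you can diff its project.clj against yours or simply start it as a known-good baseline:

cd ft
lein figwheel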
It seems like a very common issue with SSIS packages is releasing a package to Production that ends up running with the wrong connection string parameters. This could happen through any one of many mistakes or omissions. As a result, I find it helpful to dump all ConnectionString values to a log file. This helps me understand which connection strings were actually applied to the package at run time.
Now, I am considering having my packages check whether every connection object in the package had its connection string overridden by an entry in the config file and, if not, raise a warning or even fail the package. The goal is to allow easier configuration by extracting all environment-specific values to a config file. If a connection string is never overridden, there is a risk that a package run in production uses development settings, or that a package run in a non-production setting for testing is accidentally run against production.
I'd like to borrow from anyone who may have tried to do this. I'd also be interested in suggestions on how to accomplish this with minimal work.
Thanks.
Technical question 1 - what are my connection strings?
This is an easy question to answer. In your package, add a Script Task and enumerate through the Connections collection. I fire the OnInformation event, and if I had this scheduled, I'd be sure to include the /Rep IEW option in my dtexec call to ensure I record Information, Errors and Warnings.
namespace TurnDownForWhat
{
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;

    /// <summary>
    /// ScriptMain is the entry point class of the script. Do not change the name, attributes,
    /// or parent of this class.
    /// </summary>
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            bool fireAgain = false;

            // Log every connection manager's name and connection string as an Information event.
            foreach (var item in Dts.Connections)
            {
                Dts.Events.FireInformation(0, "SCR Enumerate Connections", string.Format("{0}->{1}", item.Name, item.ConnectionString), string.Empty, 0, ref fireAgain);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Running that on my package, I can see I had two Connection managers, CM_FF and CM_OLE along with their connection strings.
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_FF->C:\ssisdata\dba_72929.csv
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_OLE->Data Source=localhost\dev2012;Initial Catalog=tempdb;Provider=SQLNCLI11;Integrated Security=SSPI;
Add that to ... your OnPreExecute event for all the packages; no one has to look at it, but everything reports back.
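For a scheduled run, the dtexec invocation would be along these lines (the package path is hypothetical); /Rep IEW keeps the Information, Error and Warning events in the captured output:

dtexec /File "C:\ssisdata\MyPackage.dtsx" /Rep IEW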
Technical question 2 - Missed configurations
I'm not aware of anything that will let a package know whether it's under configuration. I'm sure there's an event, as you will see in your Information/Warning messages, when a package attempts to apply a configuration, doesn't find one, and is going to retain its design-time value: Information, "I'm configuring X via Y"; Warning, "tried to configure X but didn't find Y". But how to have a package inspect itself to find that out, I have no idea.
That said, I've seen reference to a property that fails a package on a missed configuration. I'm not seeing it now, but I'm certain it exists in some crevice. You can supply the /w parameter to dtexec, which treats warnings as errors; and really, warnings are just errors that haven't grown up yet.
Unspoken issue 1 - Permissions
I had a friend who botched an XML config file as part of their production deploy. Their production server started consuming data from a dev server. Bad things happened. It sounds like you have had a similar situation. The resolution is easy: insulate your environments. Are you using the same service account for your production-class SQL Server boxes and dev/test/uat/qa/load/etc.? STOP. Make a new one. Don't allow prod to talk to any boxes that aren't in its tier of service. Someone bones a package and doesn't set a configuration? First of all, you'll catch it when it goes from dev to something-before-production, because that tier wouldn't be able to talk to anything else that's not at that level. But if you're in an ultra-cheap shop and you've only got dev and prod, so be it. A non-configured package goes to prod. The prod SQL Agent fires off the package. The package uses its default connection manager and fails validation because it can't talk to the dev sales database.
Unspoken issue 2 - template
What's your process when you have a new package to build? Does your team really start from scratch? There are so many ways to solve this problem, but the core concept is to define your best practices for Configuration, Logging, Package Protection Level, Transaction levels, etc. in some easily consumable form. Maybe that's three starter packages: one for raw acquisition, one that stages and conforms the data, and a last one that moves data from conformed into the final destination. Teammates then simply have to pick one to start from and fill in the spots that need it. If they choose to do their own thing, that's the stick you beat them with when their package fails to run in production because they didn't follow the standard path.
There are other approaches here. If you're a strong .NET crew, you can generate your template packages that way. At this point, I create my templates with Biml and use that to drive basic package creation.
If I am understanding you correctly, the solution below should work.
My suggestion is to set the ProtectionLevel property at the top level of the package to DontSaveSensitive.
This will require you to use package configurations for every connection; otherwise the package will not have the credentials to make a connection.
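For reference, a package configuration entry that overrides a connection string looks roughly like this inside a .dtsConfig file (a sketch; the path reuses the CM_OLE name from above and the value is illustrative):

<Configuration ConfiguredType="Property"
               Path="\Package.Connections[CM_OLE].Properties[ConnectionString]"
               ValueType="String">
  <ConfiguredValue>Data Source=prodserver;Initial Catalog=Sales;Provider=SQLNCLI11;Integrated Security=SSPI;</ConfiguredValue>
</Configuration>

With DontSaveSensitive set, any credential-based connection that lacks such an entry fails to connect instead of silently running with design-time values.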
I might be asking something trivial, but what I've tried doesn't seem to work. With my "MAIN" appender, I want to log all "info"s everywhere, except in a third-party package (let's call it boring), which produces too many of them, so there I want warnings only. Additionally, I want to log "debug"s in my own interesting package. This works fine with the following logback.groovy:
root(INFO, ["MAIN"])
logger("interesting", DEBUG, ["MAIN"])
logger("boring", WARN, ["MAIN"])
Now I want to configure a different appender logging one level more detail, like:
root(DEBUG, ["DETAIL"])
logger("interesting", TRACE, ["DETAIL"])
logger("boring", INFO, ["DETAIL"])
This also works on its own, but when I put both together, it doesn't. I can imagine that this is caused by the fact that each Logger is either on or off for a given level. I'm aware that, for the behavior I want, the loggers in the boring package must be on the INFO level (because of the DETAIL appender) and the messages for the MAIN appender must then be filtered, but I can't see how to configure this.
UPDATE
I see I was doing nearly everything wrong. The line
logger("interesting", DEBUG, ["MAIN"])
says nothing like "set the level to DEBUG for the MAIN appender and the package interesting and below"; it does two independent things instead:
set the level to DEBUG for the package interesting and below
add the MAIN appender to the "interesting" Logger
This seems to be impossible without writing my own filter, which is fortunately pretty easy. I ended up with something like
root(DEBUG, ["DETAIL", "MAIN"])

// settings for the most detailed appender
logger("interesting", TRACE)
logger("boring", INFO)

appender("MAIN", ...) {
    ...
    filter = new MyFilter()
        .deny("boring", INFO)
        .accept("interesting", DEBUG)
        .deny("", DEBUG)
}
where deny and accept are my methods adding entries to MyFilter. The entries are evaluated sequentially, i.e.,
in package boring and below, everything with level INFO or below gets denied
in package interesting and below, everything with level DEBUG or above gets accepted
otherwise, in any package, everything with level DEBUG or below gets denied
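A minimal sketch of such a filter, assuming logback's Filter API (the chained deny/accept builder mirrors the rules above and is my own construction, not a logback feature; matching on a bare name prefix via startsWith is a simplification):

import ch.qos.logback.classic.Level
import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.filter.Filter
import ch.qos.logback.core.spi.FilterReply

class MyFilter extends Filter<ILoggingEvent> {
    // Each entry: [logger-name prefix, threshold level, reply on match].
    private final List entries = []

    // Deny events at `level` or below under `prefix`.
    MyFilter deny(String prefix, Level level) {
        entries << [prefix, level, FilterReply.DENY]
        return this
    }

    // Accept events at `level` or above under `prefix`.
    MyFilter accept(String prefix, Level level) {
        entries << [prefix, level, FilterReply.ACCEPT]
        return this
    }

    @Override
    FilterReply decide(ILoggingEvent event) {
        for (entry in entries) {
            def (String prefix, Level threshold, FilterReply reply) = entry
            if (!event.loggerName.startsWith(prefix)) continue
            boolean matches = (reply == FilterReply.DENY) ?
                    event.level.toInt() <= threshold.toInt() :
                    event.level.toInt() >= threshold.toInt()
            if (matches) return reply   // first matching entry wins
        }
        return FilterReply.NEUTRAL      // fall through to normal appender handling
    }
}

Entries are checked in order, so the most specific rules go first, exactly as in the configuration above.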
Inspired by this question.