Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code-sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am looking for a simple - and hopefully official - way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/ios/android) and environment (development/production/staging?).
Doing the first part is obviously trivial - after all, that is the prime purpose of the code-sharing schematics. So, different versions of the same file are identified by different extensions. This page explains things pretty simply.
What I can't figure out as easily is whether the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or, even better, development/staging/production) versions of a file. Think for example of a config.ts file that contains different parameters based on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters during the call to tns build / tns run and then fetching them via webpack environment variables... See here. This may work, but it seems oddly convoluted
a third option that gets mentioned is using hooks to customize the build (or a plugin that does the same)
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve) - I wasn't able to get the mobile compiler to recognize files that end with debug.ts, prod.ts or release.ts
While it may be that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something... somewhere.
In case this IS somehow supported, I also wonder how it may integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only option available is switching between debug and release mode), but that is probably better left for another question.
Environment files are not yet supported; passing environment variables on the build command could be the most viable solution for now.
But of course, you could write your own schematics if you would like immediate support for environment files.
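To illustrate the environment-variable route: flags such as --env.prod can be passed on the tns command line and picked up in webpack.config.js. The sketch below is only a rough, unofficial example against a default nativescript-dev-webpack setup; the NormalModuleReplacementPlugin swap and the file names are my own assumptions, not something the template ships with.

// webpack.config.js (sketch) - nativescript-dev-webpack exports a function that
// receives the --env.* flags, e.g.:  tns run android --bundle --env.prod
const webpack = require("webpack");

module.exports = env => {
    const production = env && env.prod;

    const config = {
        // ... the template's existing webpack configuration stays here ...
        plugins: []
    };

    if (production) {
        // hypothetical swap: point imports of environment.ts at environment.prod.ts
        config.plugins.push(new webpack.NormalModuleReplacementPlugin(
            /environment\.ts$/,
            "./environment.prod.ts"
        ));
    }

    return config;
};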
I did not look into sharing environment files between web and mobile yet - I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess.

I might have an answer to your second question regarding Sidekick. The latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between it and the version of TypeScript that Sidekick's cloud build uses. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!
Does anyone know what happened to the very useful HTML::Mail package which used to be on CPAN? The package made it possible to send web pages as an HTML attachment, with various options for inlining embedded content such as CSS and images. It integrated nicely with whatever mail infrastructure your server had (Postfix, Sendmail, ...).
I am moving a big old Perl-driven website from Fedora to AWS and Ubuntu, and HTML::Mail is the only package I am not able to find. I can't seem to find any single substitute either, so any pointers are welcome.
Thanks in advance.
It still seems to be there. But the fact that the module name is in red (which means it's unindexed for some reason) and the module hasn't been updated since 2008 would make me wary of using it.
There's a review of it on CPAN Ratings which says:
This module is like MIME::Lite::HTML or Email::MIME::CreateHTML. Like the former, it uses MIME::Lite, and for that reason alone I suggest you avoid it. MIME::Lite is quite buggy, and best avoided in all situations. Investigate, instead, Email::MIME::CreateHTML.
So I'd look at Email::MIME::CreateHTML instead.
I am working on a Rails app, and a designer is designing the raw HTML pages separately. It's tough to get his environment set up to use the application directly, so I would like to be able to somehow "store" the HTML of all of the pages that my application generates to a directory somewhere, so that I can pass the current version off to the designer.
Does anyone know of a gem or rake task that would help me do something like this?
I am also open to other suggestions for working in parallel with designers who don't know rails.
Thanks
Edit
I guess an amendment to my question would be: does anyone also know of ways of generating the list of page links to feed to wget, other than going through them by hand?
Edit 2
Just thinking out loud... to generate every possible page in an app, you'd need to call every action in every controller. So I'd need a program to find which controllers exist in all of my app/gems/plugins, and then find all of the public methods in them... Or maybe I could just use the actions that are routable from the list of routes (see the sketch below).
Then you might want to filter out the actions that don't render HTML.
Then you might want to filter out destructive actions (unless this program ran in a test environment, and rebuilt the system every time).
Then, since many actions depend on the parameters that are supplied, you'd need to have control over which parameters are sent to each action...
Then you'd also have to be able to send session cookies to log in
what else..
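Something like this rough sketch is what I have in mind for the routes idea - dumping the parameterless GET routes so they could be fed to wget (untested, and it assumes a reasonably recent Rails; the filtering is only a crude heuristic):

# run from `rails console`, or wrap it in a small rake task
Rails.application.routes.routes.each do |route|
  path = route.path.spec.to_s.sub("(.:format)", "")
  # keep only GET routes that need no dynamic segments such as :id
  puts path if route.verb.to_s =~ /GET/ && !path.include?(":")
end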
wget -m http://somewhere.com
This command will fetch all the files / pages from http://somewhere.com and download them to a local directory, to form a local "mirror."
-m, --mirror
    Turn on options suitable for mirroring. This option turns on recursion and
    time-stamping, sets infinite recursion depth and keeps FTP directory
    listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
Note: I don't believe Mac OS X ships with wget. If you are using a Mac, I'd suggest installing Homebrew and then running brew install wget.
Read more: man wget
OK, I've asked a few related questions here and only ended up with more questions, and I realize now it's because I don't have enough background info. So I'll make it more generic:
I need to make a simple web application. Static HTML/jQuery pages will send AJAX POST requests to some server-side code, which will:
Read the POST variables passed in
Run some very simple logic
Hit a MySQL database for simple CRUD ops
Return a plain string of data to be consumed by the javascript on the page
I was assuming Ruby was a good choice for this as everyone is raving about how well it's designed, and I've been playing with it - not RoR, just Ruby for simple scripting tasks - and I kind of like it.
My question is, I'm hopelessly confused by the trillion helper libraries and frameworks out there. I don't know what these are and therefore whether I need any/all of them: Rack, Sinatra, Camping, mod_ruby, FastCGI, etc.
Would it be easier to just learn PHP and use that? Or can I get away with just dropping my .rb files into the cgi-bin folder (I'm using Apache for hosting) and using the Ruby cgi library to get my variables?
EDIT: As far as Rails goes, I'm just assuming that it's overkill for what I want, but I might be wrong. I looked at it, and it seemed cool for generating data-based web sites quickly, but that's not what I'm trying to do. I don't want any forms pages for the user. I don't want them entering data or viewing records. I don't even want to return any HTML. I just want a ruby script to sit on the server, get passed a few variables in a POST request, and return a JSON string in response. I will need some basic cookie/session/state management.
This is a really easy thing to do in C# and ASP.NET with webservices, but it seems very confusing with the open source technologies.
You don't want to use any feature from a full-blown framework, so don't use one. Less code = fewer bugs = fewer security nightmares.
CGI
CGI has some performance drawbacks in comparison to the other methods, but it is still (in my opinion) the simplest and easiest one to use. This is how you use the built-in cgi library:
require "cgi"
cgi= CGI.new
answer= evaluate(cgi.params)
cgi.out do
answer
end
rack
Another low-tech, easy-to-use variant would be Rack. Rack is an abstraction layer that works over many webserver interfaces (CGI, FastCGI, WEBrick, ...). Its simplicity is comparable to that of plain CGI. Put the following into a file whose name ends with .ru in your cgi directory.
#!/usr/bin/env rackup
require "rack/request"

run(lambda do |env|
  request = Rack::Request.new(env)
  answer = evaluate(request.params)   # your own logic again
  [200, { "Content-Type" => "text/plain" }, [answer]]
end)
This does not look very different from CGI, but it gives you many more possibilities. If you just execute this file on your local machine, rackup will start the WEBrick webserver, which will serve the app you described in your .ru file.
Other interfaces
fast-cgi
FastCGI works almost like CGI. The difference is that with CGI your script gets started anew for every request it has to handle, while with FastCGI your script starts only once and then serves all requests. There is a library available for writing FastCGI scripts in Ruby.
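A minimal sketch with that library (the third-party fcgi gem - gem install fcgi; `evaluate` again stands in for your own logic) looks almost identical to the CGI version:

require "fcgi"

# the script stays resident; this loop handles one request per iteration
FCGI.each_cgi do |cgi|
  answer = evaluate(cgi.params)
  cgi.out do
    answer
  end
end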
mod_ruby
mod_ruby is a Ruby interpreter built into Apache. It works analogously to mod_php.
mongrel
Mongrel is a standalone webserver for Ruby applications. This is a simple hello-world example with it:
require 'mongrel'

class SimpleHandler < Mongrel::HttpHandler
  def process(request, response)
    response.start(200) do |head, out|
      head["Content-Type"] = "text/plain"
      out.write("hello world!\n")
    end
  end
end

h = Mongrel::HttpServer.new("0.0.0.0", "3000")
h.register("/hello", SimpleHandler.new)
h.run.join
Mongrel is often used for Rails and other Ruby frameworks. Most people put Apache or something else on port 80; that webserver then distributes the requests to several Mongrel servers running on other ports. I think this is totally overkill for your needs.
phusion passenger
Passenger is also called mod_rails or mod_rack. It is a module for Apache and nginx for hosting Rails and Rack applications. According to its website, Rails with Passenger uses 1/3 less RAM than Rails alone. If you write your software for Rack, you could make it a little faster by using Passenger instead of CGI or FastCGI.
Use jQuery and PHP.
Both technologies are well documented, and you should be able to get an application up and running in a matter of hours. You sound like you know a thing or two - talking about CRUD operations and so on - so I won't bore you with examples. And as far as JSON goes, there are probably a million PHP libraries out there, for outputting JSON objects.
Sinatra is very simple to learn and use. It's also easy to deploy with Phusion Passenger (which is like mod_php for Ruby frameworks like Rails and Sinatra). Instructions here: http://blog.squarefour.net/2009/03/06/deploying-sinatra-on-passenger/
If you find that you need more than what Sinatra gives you, I recommend Rails. Setting that up with Passenger is even easier, since hardly any configuration is required (see modrails.com).
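For the use case you describe (read the POST variables, run some logic, return a plain string or JSON), a Sinatra app can be just a few lines. A rough sketch - the route name, parameter and logic are made up:

require "sinatra"
require "json"

post "/lookup" do
  id = params["id"]                  # POST variables arrive in `params`
  # ... simple logic / MySQL CRUD would go here ...
  content_type :json
  { result: "value for #{id}" }.to_json
end

Run it with ruby app.rb and it listens on port 4567 by default.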
PHP is very easy to use because it's designed specifically for this sort of thing. Want to read POST variables? They're in $_POST. Want to query MySQL? mysql_query("SELECT `something` FROM `table`");. And if you ever need help, Google searches for "php what_you_need_to_do" almost always return results on php.net, which is very helpful.
And for what you're doing, you don't need any additional frameworks.
I am curious about what I perceive to be your resistance to trying Rails. You say that you want "to spend more time on the scripting itself and less on configuration", and yet you seem to dismiss Rails out of hand. Rails is all about convention over configuration. If you take the time to learn how Rails does things, you can get an incredible amount of functionality "for free" just by following the conventions of the framework.
If you want to make a simple web app, Rails is really a very painless and good way to start. You can use a sqlite database and not even mess with MySQL (won't scale, but for learning or simple apps it's fine). It's true that there are simpler frameworks, but since you seem new to web programming, I'd recommend that you start with what will get you the most support in terms of documentation and knowledgeable folks. Follow the old adage: Get it working first, then optimize later.
The only sticking point I can see is the Apache integration... The consensus on Rails deployment these days seems to be focused on using lightweight httpds in place of Apache. There is a mod_fcgid which seems to be the best way to do it with Apache (mod_ruby is deprecated, buggy and slow, last I read) if you can do custom mods. Or there's Phusion Passenger, which seems to be the latest and greatest way to do it. Running Rails in a standard CGI environment will yield awful performance (but that goes for any CGI framework, really) due to the overhead of executing the interpreter + framework for every request. You'll get much better performance if you go with something that keeps the interpreter + framework in memory.
I personally like Django. I had a problem with Ruby on Rails where I just got overwhelmed by everything when I just wanted to do something simple, which it sounds like you want to do (since you said ROR feels like overkill). The cool thing I found with Django is that if you WANT everything, then you can get everything by plugging it in...but if you want less then you just don't plug in that technology and it's that much more lightweight.
Take for example "views". Django, like ROR, uses MVC. But if you just want to return a string of data and don't need the view, then you don't need to plug in the view. But if later on you decide that it will be more organized in a view then you can easily plug it in at that time.
Here's their website: http://www.djangoproject.com/
What are your opinions on developing for the command line first, then adding a GUI on after the fact by simply calling the command line methods?
eg.
W:\ todo AddTask "meeting with John, re: login peer review" "John's office" "2008-08-22" "14:00"
loads todo.exe and calls a function called AddTask that does some validation and throws the meeting in a database.
Eventually you add in a screen for this:
============================================================
Event: [meeting with John, re: login peer review]
Location: [John's office]
Date: [Fri. Aug. 22, 2008]
Time: [ 2:00 PM]
[Clear] [Submit]
============================================================
When you click submit, it calls the same AddTask function.
Is this considered:
a good way to code
just for the newbies
horrendous!
Addendum:
I'm noticing a trend here for "shared library called by both the GUI and CLI executables." Is there some compelling reason why they would have to be separated, other than maybe the size of the binaries themselves?
Why not just call the same executable in different ways:
"todo /G" when you want the full-on graphical interface
"todo /I" for an interactive prompt within todo.exe (scripting, etc)
plain old "todo <function>" when you just want to do one thing and be done with it.
Addendum 2:
It was mentioned that "the way [I've] described things, you [would] need to spawn an executable every time the GUI needs to do something."
Again, this wasn't my intent. When I mentioned that the example GUI called "the same AddTask function," I didn't mean the GUI called the command line program each time. I agree that would be totally nasty. I had intended (see first addendum) that this all be held in a single executable, since it was a tiny example, but I don't think my phrasing necessarily precluded a shared library.
Also, I'd like to thank all of you for your input. This is something that keeps popping back in my mind and I appreciate the wisdom of your experience.
I would go with building a library with a command line application that links to it. Afterwards, you can create a GUI that links to the same library. Calling a command line from a GUI spawns external processes for each command and is more disruptive to the OS.
Also, with a library you can easily do unit tests for the functionality.
But even then, as long as your functional code is separate from your command-line interpreter, you can just reuse the source for a GUI without having to ship both kinds of executable at once to perform an operation.
Put the shared functionality in a library, then write a command-line and a GUI front-end for it. That way your layer transition isn't tied to the command-line.
(Also, this way adds another security concern: shouldn't the GUI first have to make sure it's the RIGHT todo.exe that is being called?)
Joel wrote an article contrasting this ("unix-style") development to the GUI first ("Windows-style") method a few years back. He called it Biculturalism.
I think on Windows it will become normal (if it hasn't already) to wrap your logic into .NET assemblies, which you can then access from both a GUI and a PowerShell provider. That way you get the best of both worlds.
My technique for programming backend functionality first, without needing an explicit UI (especially when the UI isn't my job yet, e.g. I'm designing a web application that is still in the design phase), is to write unit tests.
That way I don't even need to write a console application to mock the output of my backend code - it's all in the tests, and unlike your console app I don't have to throw the test code away, because it is still useful later.
I think it depends on what type of application you are developing. Designing for the command line puts you on the fast track to what Alan Cooper refers to as "Implementation Model" in The Inmates are Running the Asylum. The result is a user interface that is unintuitive and difficult to use.
37signals also advocates designing your user interface first in Getting Real. Remember, for all intents and purposes, in the majority of applications, the user interface is the program. The back end code is just there to support it.
It's probably better to start with a command line first to make sure you have the functionality correct. If your main users can't (or won't) use the command line then you can add a GUI on top of your work.
This will make your app better suited for scripting as well as limiting the amount of upfront Bikeshedding so you can get to the actual solution faster.
If you plan to keep your command-line version of your app then I don't see a problem with doing it this way - it's not time wasted. You'll still end up coding the main functionality of your app for the command-line and so you'll have a large chunk of the work done.
I don't see working this way as being a barrier to a nice UI - you've still got the time to add one and make it usable, etc.
I guess this way of working would only really work if you intend for your finished app to have both command-line and GUI variants. It's easy enough to mock a UI and build your functionality into that and then beautify the UI later.
Agree with Stu: your base functionality should be in a library that is called from the command-line and GUI code. Calling the executable from the UI is unnecessary overhead at runtime.
@jcarrascal
I don't see why this has to make the GUI "bad"?
My thought would be that it would force you to think about what the "business" logic actually needs to accomplish, without worrying too much about things being pretty. Once you know what it should/can do, you can build your interface around that in whatever way makes the most sense.
Side note: Not to start a separate topic, but what is the preferred way to address answers to/comments on your questions? I considered both this, and editing the question itself.
I did exactly this on one tool I wrote, and it worked great. The end result is a scriptable tool that can also be used via a GUI.
I do agree with the sentiment that you should ensure the GUI is easy and intuitive to use, so it might be wise to even develop both at the same time... a little command line feature followed by a GUI wrapper to ensure you are doing things intuitively.
If you are true to implementing both equally, the result is an app that can be used in an automated manner, which I think is very powerful for power users.
I usually start with a class library and a separate, really crappy and basic GUI. Since a command line involves parsing the command line, I feel like I'm adding a lot of unnecessary overhead.
As a bonus, this gives an MVC-like approach, as all the "real" code is in a class library. Of course, at a later stage, refactoring the library together with a real GUI into one EXE is also an option.
If you do your development right, then it should be relatively easy to switch to a GUI later on in the project. The problem is that it's kinda difficult to get it right.
Kinda depends on your goal for the program, but yeah, I do this from time to time - it's quicker to code, easier to debug, and easier to write quick-and-dirty test cases for. And so long as I structure my code properly, I can go back and tack on a GUI later without too much work.
To those suggesting that this technique will result in horrible, unusable UIs: You're right. Writing a command-line utility is a terrible way to design a GUI. Take note, everyone out there thinking of writing a UI that isn't a CLUI - don't prototype it as a CLUI.
But, if you're writing new code that does not itself depend on a UI, then go for it.
A better approach might be to develop the logic as a library with a well-defined API and, at the dev stage, no interface (or a hard-coded one); then you can write the CLI or GUI later.
I would not do this for a couple of reasons.
Design:
A GUI and a CLI are two different interfaces used to access an underlying implementation. They are generally used for different purposes (GUI is for a live user, CLI is usually accessed by scripting) and can often have different requirements. Coupling the two together is not a wise choice and is bound to cause you trouble down the road.
Performance:
The way you've described things, you need to spawn an executable every time the GUI needs to do something. This is just plain ugly.
The right way to do this is to put the implementation in a library that's called by both the CLI and the GUI.
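To make that concrete, here is a tiny sketch of the split (in Ruby purely for illustration - the structure is what matters, and every name here is hypothetical). A GUI front-end would call the same TodoCore.add_task method in-process from its Submit handler instead of shelling out:

# --- shared core: the part that belongs in a library ---
module TodoCore
  def self.add_task(event, location, date, time)
    # validation and the database insert would go here
    "Added: #{event} (#{location}) on #{date} at #{time}"
  end
end

# --- thin command-line front-end ---
if ARGV.length == 4
  puts TodoCore.add_task(*ARGV)
else
  puts "usage: todo <event> <location> <date> <time>"
end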
John Gruber had a good post about the concept of adding a GUI to a program not designed for one: Ronco Spray-On Usability
Summary: It doesn't work. If usability isn't designed into an application from the beginning, adding it later is more work than anyone is willing to do.
@Maudite
The command-line app will check params up front and the GUI won't - but they'll still be checking the same params and inputting them into some generic worker functions.
Still the same goal. I don't see the command-line version affecting the quality of the GUI one.
Write a program that you expose as a web service, then have the GUI and the command line call that same web service. This approach also allows you to make a web GUI, to provide the functionality as SaaS to extranet partners, and/or to better secure the business logic.
This also allows your program to more easily participate in an SOA environment.
For the web service, don't go overboard. Do YAML or XML-RPC. Keep it simple.
In addition to what Stu said, having a shared library will allow you to use it from web applications as well. Or even from an IDE plugin.
There are several reasons why doing it this way is not a good idea. A lot of them have been mentioned, so I'll just stick with one specific point.
Command-line tools are usually not interactive at all, while GUIs are. This is a fundamental difference, and it is painful for long-running tasks, for example.
Your command-line tool will at best print out some kind of progress information - newlines, a textual progress bar, a bunch of output... Any errors it can only write to the console.
Now you want to slap a GUI on top of that - what do you do? Parse the output of your long-running command-line tool? Scan for WARNING and ERROR in that output to throw up a dialog box?
At best, most UIs built this way throw up a pulsating busy bar for as long as the command runs, then show you a success or failure dialog when the command exits. Sadly, this is how a lot of UNIX GUI programs are thrown together, making for a terrible user experience.
Most repliers here are correct in saying that you should probably abstract the actual functionality of your program into a library, then write a command-line interface and the GUI at the same time for it. All your business logic should be in your library, and either UI (yes, a command line is a UI) should only do whatever is necessary to interface between your business logic and your UI.
A command line is too poor a UI to make sure you develop your library good enough for GUI use later. You should start with both from the get-go, or start with the GUI programming. It's easy to add a command line interface to a library developed for a GUI, but it's a lot harder the other way around, precisely because of all the interactive features the GUI will need (reporting, progress, error dialogs, i18n, ...)
Command-line tools generate fewer events than GUI apps and usually check all the params before starting. This will limit your GUI, because for a GUI it could make more sense to ask for the params as your program works, or afterwards.
If you don't care about the GUI, then don't worry about it. If the end result will be a GUI, make the GUI first, then do the command-line version. Or you could work on both at the same time.
--Massive edit--
After spending some time on my current project, I feel as though I have come full circle from my previous answer. I think it is better to do the command line first and then wrap a GUI around it. If you need to, I think you can make a great GUI afterwards. By doing the command line first, you nail down all of the arguments up front, so there are no surprises (until the requirements change) when you are doing the UI/UX.
That is exactly one of my most important realizations about coding and I wish more people would take such approach.
Just one minor clarification: The GUI should not be a wrapper around the command line. Instead one should be able to drive the core of the program from either a GUI or a command line. At least at the beginning and just basic operations.
When is this a great idea?
When you want to make sure that your domain implementation is independent of the GUI framework. You want to code around the framework, not into it.
When is this a bad idea?
When you are sure your framework will never die