Deis Workflow support for non-12-factor services - MySQL

I use Deis Workflow, which is an open source Platform as a Service (PaaS) that makes it easy to deploy and manage applications on our servers.
I understand the twelve-factor methodology is the main guideline for Deis Workflow, but is it possible to use it to create services like Postgres, Redis, or MySQL?
Some other PaaS offerings, e.g. Dokku and Flynn, allow users to create services and link them to the app containers.
Is there a way to achieve the same result in Deis Workflow?

I'm an engineer at Deis, formerly on the Workflow team, and still occasionally involved in it. Great question. As you seem to have caught on to already, Workflow is (currently) hyper-focused on twelve-factor applications. Generally, what we have said is that anyone wishing to do anything more complex than that may wish to "fall back" on "plain Kubernetes," but that doesn't have to be as painful as it might sound once you take Helm into account. Helm is the Kubernetes package manager (and another Deis product); Helm 2 just went GA today, in fact. It's easy to create your own Helm charts (packages), but better still, charts already exist for common things like Postgres, Redis, and MySQL (all examples you gave). Hope this helps.
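For a concrete illustration, here is roughly what installing one of those existing charts looks like with Helm 2 and the community "stable" repository. The release name, password, and the deis config:set wiring are illustrative assumptions, so check the chart's README for the values that actually apply to your cluster:

# Rough sketch (Helm 2 era): run a MySQL chart in the same cluster as Workflow.
helm init                                    # installs Tiller (Helm 2 only)
helm install stable/mysql --name my-db \
  --set mysqlRootPassword=changeme

# The chart's NOTES output explains how to reach the new service; you can then
# hand those connection details to a Workflow app, for example:
deis config:set DATABASE_URL=mysql://root:changeme@my-db-mysql:3306/mydb -a myapp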

I am Anton, one of the maintainers of Hephy, the open source fork of Deis Workflow: https://github.com/teamhephy
Deis Workflow was originally designed with a hyper focus on deploying 12-factor apps. We don't see any major changes to that in the coming few months, except the possibility of defining multiple services per application namespace. See this PR: https://github.com/teamhephy/controller/pull/71
Aside from that, we hope to integrate other services that provide DBaaS (Databases as a Service) and publish some blog posts on how to use Hephy Workflow and those services together as a combined solution.

Related

About Cloud Foundry and OpenShift

I want to build my own PaaS platform based on Cloud Foundry and OpenShift. I want to use some of the functions of these two platforms, but I don't want to deploy them in their entirety. Is this feasible? What similar open source projects could I learn from?
Here is some content about OpenShift:
OpenShift Online: the free plan is enough for your first training.
OpenShift Hands-On training: practical, hands-on training; you don't need to prepare your own environment.
OpenShift Documentation: Enterprise documentation, plus documentation for the open source distribution, OKD.
If you'd like to deploy OpenShift on-premises as an open source project, you can review/test/operate OKD (formerly OpenShift Origin).
I hope this helps. :^)
Regarding Cloud Foundry, it is just a collection of services. We use BOSH to deploy Cloud Foundry; BOSH knows how to deploy all the services so that they can talk to each other and function cohesively. There's nothing preventing you from using a different BOSH configuration (or even a totally different tool) to deploy these services in a different way.
You can run projects like Gorouter, UAA, Cloud Controller and Garden stand-alone. The individual project sites typically have instructions for doing this.
For example:
https://github.com/cloudfoundry/gorouter#start
https://github.com/cloudfoundry/uaa#quick-start
Other components might be a little trickier as they depend on each other. Diego, for example, depends on Garden and is built to send logs through Loggregator. In these cases, you might need to do a little work if you didn't want to use one of the dependent components.
https://github.com/cloudfoundry/diego-design-notes#what-are-all-these-repos-and-what-do-they-do
I would disagree with your comment about these systems being bloated, and say that depends on your perspective. If you don't need a lot of the features, then I could see why you might think that. I'd say overkill might be a better way to put it though.
If you don't need all the functionality that PaaS platforms provide, you could look at other options: Dokku, Kubernetes, Knative, etc... You don't get all the features of CF, but the systems have smaller footprints. If you can live without the extra features, then these might be better options for you.
Hope that helps!

Production vs QA configuration

Time and again I am faced with the issue of having multiple environments that must be configured individually for an application that runs in all of them (e.g. QA, regional production environments, dev, staging, etc.), and I am wondering what the best way to organize the different configurations would be.
Would it be in the database? Different configuration files per environment? Or maybe the same file with different sections/xml tags? How would these be then deployed? Embedded within the app? Or put manually in after installation to be modified in-place?
This question is not technology-specific - I've worked with .net and Java, web-apps and desktop apps and this issue comes up time and again. I'm looking to learn different approaches to maybe adapt a hybrid to address this.
EDIT: There's one caveat that I must point out - when configuration is part of the deployed solution, it is generally installed under the root user on the host. In large organizations developers usually don't have root access to production hosts, so any change to the configuration requires a new build and deployment. Needless to say, this isn't the nicest approach, especially at organizations that have a very strict release process involving multiple teams and approval levels... (sigh, I know!)
Borrowing from Jez Humble and David Farley's book "Continuous Delivery" (page 41), configuration can be handled at several points:
Your build scripts can pull configuration in and incorporate it into your binaries at build time.
Your packaging software can inject configuration at packaging time, such as when creating assemblies, ears, or gems.
Your deployment scripts or installers can fetch the necessary information, or ask the user for it, and pass it to your application at deployment time as part of the installation process.
Your application itself can fetch configuration at startup time or run time.
They consider it bad practice to inject configuration at build and packaging time, because you should be able to deploy the same binaries to every environment.
My experience is that you can bake the configuration files for every environment (except sensitive information) into your deployment artifact (war, jar, zip, etc.), and design your application to take an extra parameter at startup that picks up the right set of configuration files (from the extracted deployment artifact, from the local/remote file system if they are sensitive, or from a database) during the application's startup.
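To make that concrete, here is a minimal sketch of the startup-time selection described above, assuming the per-environment files are bundled on the classpath and the environment is passed as a -Denv system property (the file naming and the property name are illustrative choices, not a standard):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {
    // Loads environment.<env>.properties from the classpath, where <env> is
    // chosen at startup, e.g.: java -Denv=prod -jar app.jar
    public static Properties load() throws IOException {
        String env = System.getProperty("env", "dev");
        Properties props = new Properties();
        try (InputStream in = AppConfig.class
                .getResourceAsStream("/environment." + env + ".properties")) {
            if (in == null) {
                throw new IOException("No configuration bundled for environment: " + env);
            }
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties config = load();
        System.out.println("Loaded " + config.size() + " settings for environment "
                + System.getProperty("env", "dev"));
    }
}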
The question is difficult to answer because it's somewhat vague. There is no technology-agnostic approach to configuration as far as I know. Exactly how configuration is set up will depend on the language/technology in question.
I'm not familiar with .NET, but with Java a popular approach is a Maven build set up with different profiles. Each profile is specific to an environment. You can then define different properties files with environment-specific values; an example from the above link is:
environment.properties - This is the default configuration and will be packaged in the artifact by default.
environment.test.properties - This is the variant for the test environment.
environment.prod.properties - This is basically the same as the test variant and will be used in the production environment.
You can then build your project as follows:
mvn -Pprod package
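For reference, here is a rough sketch of how such per-environment profiles might be declared in the pom.xml. The config.dir property and the directory layout are my own illustrative choices, not taken from the linked article:

<!-- Hypothetical pom.xml fragment: each profile points the build at a
     different directory of environment-specific property files. -->
<properties>
    <config.dir>src/main/config/dev</config.dir>  <!-- default when no profile is active -->
</properties>

<profiles>
    <profile>
        <id>test</id>
        <properties>
            <config.dir>src/main/config/test</config.dir>
        </properties>
    </profile>
    <profile>
        <id>prod</id>
        <properties>
            <config.dir>src/main/config/prod</config.dir>
        </properties>
    </profile>
</profiles>

<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
        </resource>
        <resource>
            <!-- environment-specific files chosen by the active profile -->
            <directory>${config.dir}</directory>
        </resource>
    </resources>
</build>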
I have good news and bad news.
The good news is that Config4* (of which I am the maintainer) neatly addresses this issue with its support for adaptive configuration. Basically, this is the ability for a configuration file to adapt itself to the environment (including hostname, username, environment variables, and command-line options) in which it is running. Read Chapter 2 of the "Getting Started" manual for details. Don't worry: it is a short chapter.
The bad news is that, currently, Config4* implementations exist only for C++ and Java, so your .NET applications are out of luck. And even for C++ and Java applications, it won't make pragmatic sense to retrofit Config4* into an existing application. Because of this, I'd advise using Config4* only in new applications.
Despite the bad news, I think it is worth your while to read the above-mentioned chapter of the Config4* documentation, because doing so may provide you with ideas that you can adapt to fit your needs.

BMC Remedy Integration

Where can I find a list of BMC Remedy third-party integrations? I have found nothing on their website, and their sales department put me in touch with customer services, which wouldn't take my call because I didn't have a customer number.
My company is looking into using BMC Remedy as a customer incident system, and it would be nice if I could integrate it with other software. For example, we could have an internal development tracking system such as Jira, Redmine, MantisBT, Trac, etc. that would integrate with Remedy. Or have Remedy itself integrate with something like Hudson or CruiseControl.
So far, I've found nothing that seems to integrate with Remedy, even among software packages that have a ton of integrations like Hudson and Jira. I don't really care if the integrations are third-party and proprietary, but I'd like to make sure they already exist, and not hear "all you have to do is hire someone at $400 to program everything for you." I want to make sure there is something now, and not be promised it can be done only to find out it really can't.
I may be a bit late to the party here, but I wanted to make this info available for anybody searching for this answer in the future. BMC Remedy has a Java API, which uses a native library in C, as well as bindings for Perl and other languages capable of calling native code. If you can work in any of those languages, you can write a custom integration program against it. As 'Gary L' mentioned, Remedy can also expose any form as a web service, and in my experience those have simple interfaces.
Since the original question was asked, BMC has created a doc with a wealth of information on their wiki. A Swedish company, RRR, has also collected every version of the Remedy Java API and the required native libraries on a single page. It appears that you no longer need a support ID to access these pages and download the API files.
Hopefully somebody finds this helpful!
Your definition of "integrate" is different from their version. Their version of integration means that if a source system exposes its data, then you can configure ARS to retrieve that information and map it to classes (forms) within their system. They have a "generic" integration system that you have to customize. It has three broad areas:
If you can connect directly to a 3rd-party database and see its schema, then you can perform retrievals of that information. We use Oracle today.
They have a Java API that lets you access the ARS system from custom code (I do a lot of this); see the sketch after this list.
Flat CSV file importation of data from a source system into ARS (after export).
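To illustrate item 2, here is a rough sketch of pushing a record into a custom staging form through the Java API. Every identifier below (the form name, the field IDs, and even the exact AR System API class and method names) should be treated as an assumption to verify against your Remedy version:

// Rough sketch only: the AR System Java API class and method names are from
// memory and may differ between Remedy versions; the form name and field IDs
// are hypothetical placeholders.
import com.bmc.arsys.api.ARServerUser;
import com.bmc.arsys.api.Entry;
import com.bmc.arsys.api.Value;

public class RemedyPush {
    public static void main(String[] args) throws Exception {
        // Connect to the AR System server (user, password, locale, server name).
        ARServerUser ctx = new ARServerUser("integration-user", "secret", "", "remedy.example.com");
        ctx.login();

        // Build an entry for a hypothetical custom staging form; the numeric keys
        // are field IDs defined on that form.
        Entry entry = new Entry();
        entry.put(536870913, new Value("JIRA-1234"));           // external ticket id
        entry.put(536870914, new Value("Build failed on CI"));  // short description

        String entryId = ctx.createEntry("MY:Integration:Staging", entry);
        System.out.println("Created entry " + entryId);
        ctx.logout();
    }
}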
I looked at their online support for the systems you mention (Jira, Redmine, MantisBT, Trac) and do not see anything that would accomplish any of the three above without your own customizations. With the work that I've done on this system, that doesn't surprise me.
I work on a project today that writes custom code doing the items above. It is a system that is configuration/development heavy for us. Your comment, "all you have to do is hire someone at $400 to program everything for you," is not too far off from what we have to do with the system.
There is another option for Remedy integration: Web Services.
BMC Remedy makes it easy to create web services (WSDL); it creates the SOAP and XML for you. When you buy the Remedy Incident Management module, it includes out-of-the-box web services that let it consume and/or publish web services, which makes it easy to integrate with other systems on the intranet or externally. There are BMC publications that provide details on ITSM integration, but again you will need a customer/support ID to get them from BMC's website.
Yes and no to the web services integration. The version 8 system I was working on had some web services available, but they were incomplete. I was able to do a number of functions (mostly read-only), specifically custom display and Change Request checking, plus submission of a Change Request and a Work Order. But many functions had no web service, and I ended up brute-forcing through the web user interface (with a customized browser control) to change dates on tasks or create tasks. Ugly, but effective. There are also mid-tier JavaScript calls that can be used, if you know the secret function name and can deal with the dynamic naming convention in play. For Remedy users who are desperate for some integration, there are ways it can be done.
A few out-of-the-box integrations are possible with BMC products, but for anything else you have to write web services (REST or SOAP).
Companies like IBM and Cisco have made connectors for integrating with Remedy.
Just adding more detail here:
I also do a ton of direct SQL for Remedy integration.
If you're careful and know what you're doing, you can have a stored procedure create legal/valid records in a Remedy table. (If you do it wrong, the records won't load in the client, and in older versions of the Windows client this can actually crash the client software.)

Continuous Integration without the "Build"

Our group uses Visual SourceSafe as a file repository for all of our "content" (HTML, CSS, JavaScript, JSP). None of it requires building or compilation, but we would like to automate copying it to a Unix dev server upon check-in.
I have used CruiseControl.NET in the past for CI at other companies, but that was for .NET. What would be the easiest way to achieve our current requirements? Would using CruiseControl.NET be overkill, or even a good idea? Thanks in advance.
-Sean
This sounds like overkill for a CI tool.
Visual SourceSafe and other version control systems should have hooks allowing you to automate a simple file copy operation.
From http://msdn.microsoft.com/en-us/library/aa302175.aspx
Use events, such as OnBeforeCheckout or OnAfterCheckIn, to automate your process.
Whether this makes sense for you depends on a couple of factors. If you are talking about a large, geographically distributed team with only change-based deployment, then yes, those are valid concerns. If you only have a few local developers and you deploy the world on each copy operation, then no, I don't think you'd need a CI tool.
This is not to say other reasons might not push you toward a CI tool, testing for instance. Your problem might also be solved by running a polling script on the Unix box to sync source control with the dev server. I guess the main point is: if you are deploying only non-compiled content, why have a separate source control and dev server at all? Your deployment can be done by a source control tool. If it is only for backup, there are plenty of existing solutions for that problem.
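As a rough sketch of that polling idea (the paths, schedule, and the use of a Git mirror instead of SourceSafe are all assumptions for illustration), a cron job on the Unix box could run something like:

#!/bin/sh
# Pull the latest content and mirror it into the dev server's web root.
set -e

CHECKOUT=/var/ci/content      # local working copy of the repository
DEST=/var/www/dev-site        # directory the dev server serves content from

cd "$CHECKOUT"
git pull --quiet              # or the equivalent "get latest" command for your VCS

# Copy only what changed; --delete removes files that were deleted upstream.
rsync -a --delete --exclude='.git' "$CHECKOUT"/ "$DEST"/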
Sean,
Our AnthillPro customers do this kind of thing pretty frequently (and we even do it internally when new content is committed for our website). It's a really good idea, totally appropriate for a CI tool, and you can get quality feedback if you wire in some automated functional / regression tests.
Eric
You could try using Hudson: http://hudson-ci.org/
It is easy to configure, is driven almost entirely through the GUI (unless you want to go into the details), and has a plugin for Visual SourceSafe: http://wiki.hudson-ci.org/display/HUDSON/Visual+SourceSafe+Plugin
While CI would probably be overkill for what you are trying to do, since Hudson is all GUI and easy to use, you would not spend a lot of time just trying to configure it.
Hudson also has plugins for copying stuff over to other systems, and so it would be easy to deploy your content to another system.
If you are worried about the process, get in touch with a hosted CI provider such as MikeCI; a quick message on their support board will get you the answer. I don't see why triggering a "build" can't be replaced with a simple copy!

Continuous Integration for a small .NET open source project [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
I'm starting a small open source project, myself being the sole contributor for the time being. Still, I think a continuous integration setup would be useful to detect whether I broke the build.
Is there a free, hosted continuous integration server that is suitable for very small projects? Googling turned up CodeBetter, but I'm not sure they'll accept a one-man project that is just starting up.
I prefer TeamCity, but I'm open to suggestions.
Note - a hosted solution is a must for me. I don't want to setup and maintain a continuous integration server, so answers like "TeamCity" or "CruiseControl" are simply irrelevant.
Specific requirements:
I am hosting my project at GitHub, so the continuous integration server needs Git integration
I would like the continuous integration server to run .NET integration (unit) tests
Nice to have - I also need access to a MySQL server (although I could modify the tests to use embedded SQLite, they currently run against an external MySQL server).
AppVeyor is well integrated with Github, free for open-source projects and really easy to set up.
Builds are configured using YAML or the UI. Free accounts are limited to one build at a time. Deployment to NuGet is supported, as well as per-project and per-account feeds. It is deeply integrated with GitHub; for example, it can create GitHub releases. It supports build matrices, AssemblyInfo patching, rolling builds, build prioritization, status badges, build notifications, etc.
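As an illustration, a minimal appveyor.yml for a project like the one described might look roughly like this (the solution name and build image are placeholders; the services entry starts a MySQL instance for the integration tests):

version: 1.0.{build}
image: Visual Studio 2015
services:
  - mysql                        # MySQL available to the tests on localhost
before_build:
  - nuget restore
build:
  project: MyProject.sln         # placeholder solution name
  verbosity: minimal
test:
  assemblies:
    - '**\*.Tests.dll'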
Travis is a well-known CI service (and seems to be the most popular hosted CI by far), and it now supports building C#, F#, and VB projects too. The caveat is that it supports only Linux and Mono, and that support is in beta ("may be removed or altered at any time").
MyGet is a hosted package server, but it now supports Build Services too (currently in preview), among other features. It's free for public feeds (500 MB max) and has slightly better terms for approved open-source projects (bigger storage and a gallery). The build service is optimized for packages: NuGet feed, MyGet feeds, SymbolSource integration, etc.
This is now provided by Microsoft, free for teams of up to five people, through Team Foundation Server (Visual Studio Team Services).
It provides:
Source Control: TFS, Git
Agile Planning: Agile, Scrum, CMMI
Continuous Builds
Collaboration
Integration
Test Execution
Deployment
Visual Studio Team Services doesn't require hosting code on it, code can be pulled from GitHub or any Git repository.
If the project is small and doesn't have complex build requirements, the Hosted pool can be used to perform CI builds. There are several limitations: available software, one build at a time, a time limit of one hour, etc. If that isn't enough, you can add your own build agents by running a script on your machines.
GitHub support isn't full (pull requests aren't built, for example), but most functionality is supported. Shields.io doesn't support VSO yet, but a custom shield is available.
The primary drawback for open-source projects is that build logs, test results and other data won't be public. Only five users can be given access to the project on a free account. There's a suggestion on UserVoice to make public projects possible.
I know the thread is quite old, but for people still looking for an answer, I recommend taking a look at AppHarbor.
It is pretty easy to set up integration with GitHub and Bitbucket, and you get basic DB connections for free through "addon" options.
Quite convenient for startups.
Also take a look at CodeHaus:
http://codehaus.org/
They use Atlassian's Bamboo CI software.
No opinion - as I've never used it.
I don't think you will easily find a truly free (by which I mean for any project, any language) hosted CI service, because such a service is very CPU-, RAM-, and disk-intensive, which implies specific rules, hardware, and pricing.
For some offers, have a look at Outsourcing Continuous Integration or this question here on SO. I didn't look at all solutions in detail so I don't know if they'll meet your requirements (language, tool and pricing).
Or try to join a forge providing Continuous Integration for open source projects like The Codehaus (EDIT: not an option for .NET projects AFAIK) or CodeBetter. This will certainly require some efforts to get your project accepted (few actually are IMHO) but this might be your best option.
I've just started using OnCheckin:
https://oncheckin.com/
They exclusively provide for .NET projects.
Maybe the right answer is for someone to make a set of EC2 images available for this sort of thing, so users can either use Amazon, or build their own cloud on Eucalyptus inside the firewall if they're paranoid... but in either case, you save the time and cost of building those images.
MikeCI is an affordable hosted CI service; from $10 per month you can have a cloud build set up in minutes. It currently supports Ruby, Maven, and Ant. It has a free 30-day trial so you can try it and see what it's like. I personally think it's great, plus I think they're looking to support .NET and Objective-C!
Here's their site: http://www.mikeci.com
I know this is probably an old thread, but here's another option:
Check out Jenkins.
It supports .NET projects, which is what I'm using it for right now.
And here's another related SO thread: TFS 2008/2010 vs Jenkins for Continuous Integration
There's RunCodeAt, which Pascal's comment pointed me to. It is super easy to integrate with GitHub, which is where I happen to host my project. I'll give it a try.