I have a web job I've created that I want to deploy to Azure. However, I'm confused about the configuration when you create it.
It's a web job that's triggered from an Azure Storage queue. It works fine locally.
However, when I go to create the web job in Azure, I'm confused by the choices: Triggered or Continuous.
If I choose Continuous, I get a choice of Single or Multi.
If I choose Triggered, I'm given a choice of Scheduled or Manual. I don't want Scheduled, and I'm not sure what Manual means... that doesn't seem right either.
I know the web job that's triggered from the Azure queue is really "polling" and not triggered... so it seems like Continuous is the right choice. But I'm not sure.
So the question is: when creating a web job that's triggered from an Azure queue, what is the right deployment configuration?
It sounds like you are using the Azure WebJobs SDK. In SDK scenarios, even though your individual functions are 'triggered', the WebJob as a whole runs continuously (i.e. your exe keeps running and does its own internal triggering). So what you want is Continuous, Multi. There's no reason to use Single in most cases, and it isn't relevant anyway until you scale out to multiple instances.
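For what it's worth, here is what that "continuously running process that does its own polling" model looks like in code. This is a minimal sketch using the azure-storage-queue Python package rather than the .NET WebJobs SDK, and the queue name, connection-string setting, and processing logic are all placeholders:

    # Minimal sketch of a continuous queue consumer: the process never
    # exits, it just polls the queue and sleeps when there is nothing to do.
    # Assumes the azure-storage-queue package; "myqueue" and the
    # AzureWebJobsStorage setting are placeholders.
    import os
    import time

    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        os.environ["AzureWebJobsStorage"],
        queue_name="myqueue",
    )

    while True:
        got_message = False
        for msg in queue.receive_messages(messages_per_page=16):
            got_message = True
            print("Processing:", msg.content)   # your actual work goes here
            queue.delete_message(msg)           # remove the message once handled
        if not got_message:
            time.sleep(5)                       # back off while the queue is empty

Deployed as a Continuous WebJob, Azure just keeps that process alive; the "triggering" happens inside your own loop (or inside the SDK's JobHost if you're using C#).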
I am fairly new to the google-cloud-platform world and I am struggling with my first steps there.
So, what I want to know is how to make a webhook app that will run 24/7 and "catch" data sent from another 3rd-party service (later I will try to do something with this data - manipulate it and push it into a DB, but that's another question to ask).
I have set up an instance on GCP which is Linux-based, but what's next?
I am familiar with PHP, but I want to do it this time in Python (learning it nowadays).
Which service in GCP should I use, and how do I set up the server to catch all the data the 3rd-party service is sending?
This sounds like a perfect fit for Google App Engine. As long as the 3rd-party service makes HTTP requests, App Engine is a great fit. You can write your application in Python, PHP, Java, or just about anything else, then GAE takes care of the rest. No need to manage Linux, instances, firewall rules, or anything else.
If your load is minimal, you may even fit into the free tier and pay nothing to run your app.
Check out the GAE Python docs at https://cloud.google.com/appengine/docs/python/.
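As a rough sketch of the receiving side (assuming Flask, which runs on App Engine's Python runtime; the /webhook route and the payload handling are placeholders):

    # Minimal webhook receiver sketch for App Engine's Python runtime.
    # Assumes Flask; the /webhook route and payload handling are placeholders.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        payload = request.get_json(silent=True) or {}
        # TODO: validate the payload and push it into your database here
        print("Received:", payload)
        return "ok", 200

    if __name__ == "__main__":
        # Local testing only; on App Engine a WSGI server runs the app for you.
        app.run(host="127.0.0.1", port=8080)

Deploy that with an app.yaml pointing at the Flask app and App Engine keeps it reachable around the clock, scaling it up and down for you.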
If you want to run your Web Hook continuously, then you can run it as a Cron job. Here is a guide on how to run Python scripts as cron jobs on Google App Engine: Scheduling Tasks With Cron for Python
I'm writing an intranet application (in a LAMP environment) that uses data from sections of an MSSQL 2012 database (used by another much larger application).
As I see it my options are to:
Directly query the database from the application.
Create a web service
Use Microsoft SQL Server Integration Services (SSIS) to have the data automatically integrated into my application's database
I'm sure the best solution here would be using SSIS; however, I've not done this before and am on a deadline - so if that's the case, could someone let me know:
a) With my limited experience in that area would I be able to set that up, and
b) What are the pros and cons of the above options?
Any other suggestions outside of the options I've thought of would also be appreciated.
Options:
Directly query the database from the application.
Upside:
Never any stale data
Downside:
Your application now contains application-specific code and is tied to that application
If you are in the common situation where the business buys another application containing the same master data, you now need special code to connect to two applications
Vendor might not like it
Might be performance impacts on source application
Use Windows Task Scheduler / SQL Agent to run a script or SSIS to replicate data at x minute intervals or so.
Upside:
Your application is only tied to your local copy of the database, which you can customise as required. If your source app gets moved to the cloud or something then you don't need to make application changes, just integration changes
If another source application appears with the same type of master data, you can now replicate that into your local DB rather than making application changes to connect to 2 databases.
Downside:
Possibility of stale data
Even worse: possibility of stale data without users realising it, with subsequent loss of confidence in the application
Another component to maintain
Whether you write a batch script, a .NET app, or an SSIS package, they are all pieces of logic that need to be scheduled to run (a minimal script sketch follows below)
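To make the "script" option concrete, a scheduled copy job doesn't have to be SSIS. Here's a minimal sketch in Python (a technology swap on my part - assuming pyodbc for the MSSQL source and PyMySQL for the LAMP-side target, with placeholder connection details, table, and column names):

    # Minimal sketch of a scheduled replication script, run from cron or
    # Windows Task Scheduler. Assumes pyodbc (MSSQL source) and PyMySQL
    # (MySQL target); connection details, table and column names are placeholders.
    import pyodbc
    import pymysql

    src = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mssql-host;"
        "DATABASE=ErpDb;UID=readonly_user;PWD=secret"
    )
    dst = pymysql.connect(host="localhost", user="app",
                          password="secret", database="intranet")

    # Pull the source rows, then upsert them into the local copy.
    rows = src.cursor().execute(
        "SELECT Id, Name, UpdatedAt FROM dbo.Customers"
    ).fetchall()

    with dst.cursor() as cur:
        cur.executemany(
            "REPLACE INTO customers (id, name, updated_at) VALUES (%s, %s, %s)",
            [tuple(r) for r in rows],
        )
    dst.commit()

Whether it's a script like this, a .NET console app, or an SSIS package, the trade-off is the same: something has to run on a schedule, and you own the staleness window.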
Another option is to replicate the database: if your source database is Oracle or SQL Server, you can use its built-in replication features to replicate the data into another database.
You need to consider where you will be in a few years. The data copy method probably gives you more flexibility to adapt to changes in the source system as you only need to change your integration, not your whole app if something drastic changes with your source system.
You also need to consider: will you ever be asked to propagate changes back the other way, i.e. update data in your local copy and have it pushed back to the source systems?
I am working with Hudson here and I am trying to create a single job that users with different access can run. Based on their access level, they would see different options.
For instance:
A Developer when running this job would see the build stage and be able to see the build process, and deploy it to a development server.
The Release Engineer would see the same options as the developer, but also see that he can deploy the code to a different set of servers as well.
And so forth.
Is this even possible - something like role-based jobs? I know I can limit access and control who can do what, but this is a little different.
I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the Hudson server to be able to receive requests for builds from outside sources and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required to support your use case in the many releases since becoming Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build starts. You can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
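For example, another script or service could kick off a build like this (a sketch using the Python requests library; the server URL, job name, repo path, and credentials are placeholders):

    # Sketch: trigger a parameterized Jenkins build remotely.
    # Assumes the requests library; URL, job name, user and API token
    # are placeholders.
    import requests

    resp = requests.post(
        "http://server/job/myjob/buildWithParameters",
        params={"REPO_PATH": "ssh://repos/project-foo"},
        auth=("ci-user", "api-token"),  # Jenkins username + API token
    )
    resp.raise_for_status()
    print("Build queued at:", resp.headers.get("Location"))

Depending on your security configuration you may also need a build token or CSRF crumb, but the idea is the same.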
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the option "Execute concurrent builds if necessary" in your job config.
The artifacts will be available, the same as any other Jenkins build. Though you probably want to manage how many recent artifacts are kept; this can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".
I want to build a web-based front-end to manage/administer my Linux box. E.g. I want to be able to add users, manage the file system and all those sorts of things. Think of it as a cPanel clone, but more for system admin rather than web admin.
I was thinking about creating a service that runs on my box and performs all the system-level tasks. This way I can have a clear separation between my web-based front-end and the actual logic. The server pages can then make calls to my specialized service, or queue tasks that way. However, I'm not sure if this would be the best way to go about it.
I guess another important question would be, how I would deal with security when building something like this?
PS: This is just a pet project and learning experience, so I'm not interested in existing solutions that do a similar thing.
Have the specialized service daemon running as a distinct user -- let's call it 'managerd'. Set up your /etc/sudoers file so that 'managerd' can execute the various commands you want it to be able to run, as root, without a password.
Have the web server drop "trigger" files containing the commands to run in a directory that is mode '770' with a group that only the web server user and 'managerd' are members of. Make sure that 'managerd' verifies that the files have the correct ownership before executing the command.
Make sure that the web interface side is locked down -- run it over HTTPS only, require authentication, and if at all possible, put in IP-specific ACLs so that you can only access it from locations known in advance.
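As a rough sketch of what 'managerd' could look like (the drop directory, trusted owner, trigger-file format, and command whitelist are all hypothetical):

    # Sketch of the 'managerd' daemon: watch a drop directory for trigger
    # files, verify their ownership, and run only whitelisted commands via
    # sudo. Directory, trusted owner and whitelist are placeholders.
    import os
    import pwd
    import subprocess
    import time

    DROP_DIR = "/var/spool/managerd"
    WEB_USER = "www-data"  # only trigger files owned by the web server user are trusted
    WHITELIST = {
        "adduser": ["/usr/bin/sudo", "/usr/sbin/adduser", "--disabled-password"],
    }

    def handle(path):
        st = os.stat(path)
        if pwd.getpwuid(st.st_uid).pw_name != WEB_USER:
            return  # wrong owner: ignore the file entirely
        with open(path) as f:
            parts = f.read().split()
        if parts and parts[0] in WHITELIST:
            subprocess.run(WHITELIST[parts[0]] + parts[1:], check=False)

    while True:
        for name in os.listdir(DROP_DIR):
            path = os.path.join(DROP_DIR, name)
            handle(path)
            os.remove(path)
        time.sleep(2)

The ownership check plus the whitelist is what keeps a compromised web process from turning this into a generic "run anything as root" hole.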
Your solution seems like a very sensible approach to the 'root' issue.
Couple of suggestions:
Binding the 'specialised service' to localhost would also help guarantee that requests can't be made externally.
Have the request handler call specific functions that perform the actions rather than giving the service full, unrestricted access - so call a function like addToGroup(user, group) instead of a generic performAction(command).
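In other words, the service would expose a handful of named operations rather than a generic command runner; roughly (hypothetical function, path, and group names):

    # Sketch: expose specific named actions instead of a generic
    # "run any command" entry point. Names are illustrative only.
    import subprocess

    def add_to_group(user: str, group: str) -> None:
        """Add an existing user to an existing group, and nothing more."""
        subprocess.run(["/usr/bin/sudo", "/usr/sbin/usermod", "-aG", group, user],
                       check=True)

    ACTIONS = {"addToGroup": add_to_group}

    def handle_request(action: str, *args: str) -> None:
        if action not in ACTIONS:
            raise ValueError("Unknown action: %s" % action)
        ACTIONS[action](*args)  # only whitelisted operations can ever run

    # e.g. handle_request("addToGroup", "alice", "developers")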