Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
Is it possible to use Google Cloud Functions to SSH into an on-prem VM?
Specifically, can I give the Cloud Function a private key to use?
I am thinking of running a timer job with a Python script that will get data from a VM outside GCP.
Is it possible?
Is there a reason I shouldn't do this?
I have not been able to test this because I don't have access to GCP yet, and I have not found any documentation that mentions it. I do realize Cloud Functions might not be able to do this, which is why I am curious whether anyone has tried it in the past.
To the best of my knowledge, it is possible. The code in a Cloud Function is just ordinary code. If you use Python, for example, you can use 'requests', 'paramiko', 'pysftp', or any other library. There are no (to be precise, very few) restrictions.
You can use a private key for that purpose. I would suggest storing the private key in Secret Manager so it is retrieved at runtime (you need to write code for that retrieval).
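A minimal sketch of that retrieval and the SSH step (the project ID, secret name, host, and user below are placeholders; the Secret Manager and paramiko imports are deferred into the function bodies, since they are third-party packages only needed at runtime):

```python
def secret_version_name(project_id, secret_id, version="latest"):
    # Resource name format used by Secret Manager
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

def fetch_private_key(project_id, secret_id):
    # Requires the google-cloud-secret-manager package
    from google.cloud import secretmanager
    client = secretmanager.SecretManagerServiceClient()
    name = secret_version_name(project_id, secret_id)
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("utf-8")

def run_remote_command(host, user, key_text, command):
    # Requires the paramiko package; loads the key from the secret text
    import io
    import paramiko
    key = paramiko.RSAKey.from_private_key(io.StringIO(key_text))
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, pkey=key)
    _, stdout, _ = ssh.exec_command(command)
    return stdout.read().decode()
```

The function's service account needs the Secret Manager Secret Accessor role on that secret for the retrieval to work.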
Be aware, however, that Cloud Functions are restricted to a maximum of 2 GB of memory (shared between RAM and a 'fake' local drive, so you can use the '/tmp' directory as if you had a local disk) and a 540-second (9-minute) timeout. Thus you need to fit everything you would like to do into those boundaries.
In addition, access to the external IP address might be whitelisted by the external party. You may need some additional network configuration so that all 'calls' from your function originate from one dedicated IP address. That is possible as well.
For a timer, you can use Cloud Scheduler, which can send a message to a Pub/Sub topic according to your cron timetable. The Cloud Function sits on the other side of the Pub/Sub topic.
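For illustration, a Pub/Sub-triggered function entry point might look like the following (first-generation Python Cloud Functions deliver the message base64-encoded in `event["data"]`; the VM-fetching step is left as a stub):

```python
import base64

def decode_message(event):
    # Pub/Sub message bodies arrive base64-encoded under "data"
    return base64.b64decode(event.get("data", b"")).decode("utf-8")

def handle_pubsub_event(event, context):
    # Background Cloud Function entry point for a Pub/Sub trigger
    payload = decode_message(event)
    # ... SSH to the VM and fetch data here ...
    return payload
```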
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I'm new to FIWARE and I'm disoriented by the large amount of information related to this platform and the number of components, called Generic Enablers, that exist. I don't know where to start.
Any advice?
Thanks!
The best place to start would be the FIWARE Developer's Website - this gives you an overview of what FIWARE is and how the various enablers fit together.
Core Context Management holds the state of the system and allows other elements to subscribe to changes of state
IoT and Robots provides sensor data and transforms it into the NGSI standard interface
Processing/Analysis/Visualization provides dashboards and complex event processing
The pillar on the right - API Management - adds security, as well as interfaces to publish data to external systems.
Deployment Tools make it easier to instantiate a FIWARE system.
If you are looking for more information, the Tour Guide describes each enabler in depth and holds a series of links to videos, user guides, presentations and so on.
If you learn by doing, the Tutorials present a series of exercises to build up a "Powered by FIWARE" application - they also describe how the various Generic Enablers fit together. The first six tutorials concentrate on the use of the Orion Context Broker, which forms the central block of FIWARE.
Basically, the NGSI v2 standard provides a common interface that allows a series of disparate building blocks to talk to each other in a common language. These are things you're probably going to need in a generic smart application but that are not unique to your application - you would provide the "special sauce" either by creating custom sensors sending context data into the system (i.e. the block at the bottom) or by writing complex processing algorithms that read and alter the context state (i.e. the block at the top).
If you want to speed up development, you can buy-not-build and use the existing free, open-source Generic Enablers. The whole system is modular, so it's easier to experiment and to add and refactor things as necessary.
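To make the "common language" point concrete, here is an illustrative Python sketch that builds an NGSI v2 entity payload and posts it to a context broker's `/v2/entities` endpoint (the broker URL is a placeholder, and the attribute-type mapping below is a simplification of the NGSI type system, not the full spec):

```python
import json
import urllib.request

# Simplified mapping from Python types to common NGSI v2 attribute types
_NGSI_TYPES = {int: "Number", float: "Number", str: "Text", bool: "Boolean"}

def make_entity(entity_id, entity_type, **attrs):
    # An NGSI v2 entity is JSON with "id", "type", and typed attributes
    entity = {"id": entity_id, "type": entity_type}
    for name, value in attrs.items():
        entity[name] = {"value": value, "type": _NGSI_TYPES.get(type(value), "Text")}
    return entity

def create_entity(broker_url, entity):
    # POST the entity to the broker (e.g. Orion) at /v2/entities
    req = urllib.request.Request(
        broker_url.rstrip("/") + "/v2/entities",
        data=json.dumps(entity).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Any component speaking NGSI v2 - a custom sensor agent, a processing service, a dashboard - can produce or consume payloads shaped like this, which is what lets the blocks interoperate.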
In addition to Jason Fox's answer:
Every year the FIWARE Foundation organizes summits, which are pretty good. There are tutorials and hands-on sessions, and I think all the slides are available here. It could also be a good starting point.
On the other hand, most of the FIWARE software components (GEs) are available on Docker Hub. Therefore, if you feel comfortable using Docker, you could set up a bunch of FIWARE GEs in a few minutes.
Finally, there is the FIWARE-IoT-Stack website, where you can find a kind of FIWARE architecture for IoT. I do not know if it's official, but in my case it was very useful.
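As a hedged illustration, a minimal Compose file for Orion plus its MongoDB backend might look like this (the image names and the `-dbhost` flag are as commonly documented for the `fiware/orion` image; check its Docker Hub page for current versions and flags):

```yaml
# Hypothetical minimal setup: Orion Context Broker + MongoDB
version: "3"
services:
  mongo:
    image: mongo:4.4
    command: --nojournal
  orion:
    image: fiware/orion
    depends_on:
      - mongo
    ports:
      - "1026:1026"          # NGSI v2 API
    command: -dbhost mongo   # point Orion at the mongo service
```

With this running, the broker answers on http://localhost:1026/v2/entities.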
Regards!
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 8 years ago.
I have a scenario for which I am looking for a message queue service that supports the following:
Easy to use
Very high performance
A message, once read, shouldn't be available to other consumers.
Should have the capability to delete a message once read.
A message, once published, should not get dropped.
The scenario I have is described below:
There are many publishers.
There will be many consumers.
The queuing server and consumers reside on the same machine, but the publishers reside on different machines.
Please let me know the best queuing service apart from RabbitMQ and SQS satisfying the above points.
I would recommend Apache Kafka: http://kafka.apache.org/
If you want a comparison between Kafka and RabbitMQ, you should read this article: http://www.quora.com/RabbitMQ/RabbitMQ-vs-Kafka-which-one-for-durable-messaging-with-good-query-features
Also, you should take a look at this: ActiveMQ or RabbitMQ or ZeroMQ or
As far as I know, Kafka is mainly meant for real-time data propagation, and I don't think my requirement needs something like Kafka. I have used SQS, but the only problem I have with SQS is high latency. The publisher pushes a message to the queue and the consumer keeps polling for new messages; this implementation is hitting me with very high latency. My requirement is simple:
The queue service should have high availability and reliability, like SQS.
Latency should be very low - say, not more than 10 ms (here 10 ms includes publishing and receiving the message).
Also, my message size is very small - say, not more than 20-30 bytes.
I have thought of using Redis: I would push the messages onto a list, and workers would keep popping them back to back until the list becomes empty, but I have not done any benchmarking on that. So here I really need a suggestion so I go in the right direction.
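A minimal sketch of that list-as-queue pattern, written against the redis-py client API (`lpush`/`brpop`); the client object is passed in, so the logic is independent of any particular connection:

```python
def publish(client, queue, message):
    # LPUSH adds the message to the head of the list
    client.lpush(queue, message)

def consume(client, queue, timeout=1):
    # BRPOP blocks on the tail (FIFO delivery) until a message arrives or
    # the timeout expires; a popped message is gone for all other consumers,
    # which matches the "read once" requirement
    item = client.brpop(queue, timeout=timeout)
    return item[1] if item is not None else None
```

With a real server this would be driven by `client = redis.Redis()`. Note that plain lists give no delivery guarantee if a worker dies mid-processing; that is the durability trade-off versus SQS.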
Thanks,
In some of my system integration projects I have faced MQ tasks. Several rich customers want production solutions like IBM WebSphere MQ, but I think that's too expensive and difficult.
I found and used a simple and stable analog: an e-mail server.
All integrated systems get local e-mail boxes. Messages are e-mails, with a command code in the subject and JSON in the attachments. The e-mail server listens and dispatches all queues, to recipients or to groups of them. E-mail protocols are stable, and all developers know plenty of tools for working with them. Sysadmins and testers use simple e-mail clients for testing and auditing. All e-mail servers have logging tools.
It's the easiest solution, and I suggest it for most integration projects.
Closed 8 years ago.
I'd like some advice as to the best server-side code that can handle real time data from devices and make decisions based on inputs. A simple example: Suppose I have a web-enabled thermometer, running a light TCP/IP client stack. When the temperature gets to 30 degrees, I want the device to contact the server, and then I want the server to send me an email. I also want the server to be able to send a command to turn on a heater.
The issue at hand here is the ability to start a TCP message from the server, and get through an assortment of arbitrary firewalls and routers, all the way down to the client device. I know that there are 'workarounds' like polling the server for updates, or 'long polling' where I call up to the server, and keep a connection open in case it has something to send. The problem here is bandwidth. Messages are rare, but important, so the headers and handshaking make up 98% of the traffic.
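To put rough numbers on that overhead claim, here is a back-of-the-envelope comparison (the ~500-byte HTTP header cost and the few-byte WebSocket frame cost are illustrative assumptions, not measurements):

```python
def overhead_ratio(payload_bytes, protocol_overhead_bytes):
    # Fraction of each exchange spent on protocol overhead rather than data
    total = payload_bytes + protocol_overhead_bytes
    return protocol_overhead_bytes / total

# A 30-byte reading over an HTTP poll with ~500 bytes of headers,
# versus a WebSocket data frame with ~4 bytes of framing:
http_poll = overhead_ratio(30, 500)   # roughly 0.94
websocket = overhead_ratio(30, 4)     # roughly 0.12
```

Once the WebSocket connection is open, each rare message costs only its framing, which is why the protocol suits this traffic pattern.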
I've been reading up on WebSockets, and it seems like they are exactly what I need, especially when paired with HTML5.
Does anyone know of a ready-to-go server software package that could run on a cloud server, and push data down to my devices using some standardized methods? I really don't want to reinvent the wheel here, and I can't believe I'm the first to try this. I see a few folks doing it with their own proprietary solutions, but I'm more interested in buying a one-stop package.
WebSocket is a valid choice for connecting embedded devices to backend infrastructure due to its low overhead, low latency, and compatibility with Web and general network infrastructure. There is a broad range of server implementations available, e.g. Jetty, Node.js-based servers, etc.
As an example, here is a demo connecting an Arduino device to a WebSocket server and a browser client showing real-time data in a chart:
https://github.com/tavendo/AutobahnPython/tree/master/examples/wamp/serial2ws
http://www.youtube.com/watch?v=va7j86thW5M
The technology used there, AutobahnPython, is a Python/Twisted-based WebSocket implementation that:
provides server and client implementations
runs directly on embedded devices like the Raspberry Pi
makes it easy to access sensors connected via serial or CAN bus (since Twisted supports that very well)
provides RPC and PubSub messaging patterns on top of WebSocket
The tech is open source, so you can roll your own solution. If you are looking for help/services to get it done for you, contact me ;) We also provide Tavendo WebMQ, a virtual appliance (VMware, EC2) which adds features, a management UI, etc., and also includes a REST API to push data to WebSocket clients.
Disclaimer: I am the author of Autobahn and work for Tavendo.
Closed 7 years ago.
Looking for an issue tracker for a medium-sized web application open project with a distributed team. We are planning to run this on our own server. It must be very easy for new users to submit new issues, and it must integrate well with other software.
Our major requirements, in descending order of importance:
open source
capable of very new-user-friendly bug submission
submitting a new issue must be as easy as possible, with only a single screen to fill out (after registration) and few fields visible (e.g. just "summary" and "description" would be good)
Google Code is an example of the sort of interface we like; Bugzilla's Bugzilla instance (https://bugzilla.mozilla.org/enter_bug.cgi) is an example of the sort of new bug submit interface that we would NOT like
it's fine if the default submit interface is not new-user-friendly as long as this is easily modifiable using templates/skins. It would be great to have an "advanced view" for bug editing with additional fields (such as who the issue is assigned to), in addition to the simple view for new user bug submission
has API; or, supports other applications concurrently accessing its db backend (we want to query and modify the issues from other, separate software running on another server)
Other desirable criteria, in descending order of importance:
not frustrating in daily use
has a relatively large community
integrates well with hg (mercurial)
amenable to integration with external:
support desk/request tracking software
project management software
auth systems (and/or supports OpenID login)
modular; if we modify the issue tracker, we want to release those improvements as a module that is easy for others to install
amenable to having some sort of simple, easy-to-use issue-importance voting system, e.g. stars on Google Code; we intend to create or modify such a component to plug into our own external voting system
amenable to integration with SugarCRM
When I say "amenable to", I mean that we are willing to code an extension to the issue tracker ourselves if necessary, however, the issue tracker's architecture should be amenable to that sort of extension.
Issue trackers which also include support desk or project management features are a plus provided that we can choose to integrate external software instead of using the included stuff. We don't need another wiki (we already have one that we like).
According to Google searches (see the comments), the most popular open source issue trackers are Trac, Bugzilla, Mantis, RT (and possibly Launchpad's). I've also included Redmine because I've never seen a recent comparison between any of these issue trackers and Redmine in which someone had something bad to say about Redmine, and in polls Redmine sometimes beats these others. Feel free to suggest others (bearing in mind that one of the criteria is "relatively large community").
There are undoubtedly multiple good issue trackers out there; many of those listed above claim to be extensible and integrable with other software. What would be most helpful would be direct comparisons between issue trackers by people who have used more than one.
How do these compare to each other on extensibility, integratability, and skinnability?
If you have used more than one of these, which of them would you recommend, and which others have you used?
Which of these are already integrated with a large number of auth systems/support desk systems/etc?
Comments explaining why a particular popular open-source issue tracker (especially one of those listed above) is NOT suitable for our situation are very welcome; this will save me time.
thanks!
Redmine. Been using it for a while. Simply excellent.
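To illustrate the question's API criterion: Redmine exposes issues over REST at `/issues.json`, with the API key carried in the `X-Redmine-API-Key` header. A minimal sketch (the tracker URL and project ID are placeholders):

```python
import json
import urllib.request

def issues_url(base_url, project_id=None, status="open"):
    # Build the REST query URL for Redmine's issues endpoint
    url = base_url.rstrip("/") + "/issues.json?status_id=" + status
    if project_id:
        url += "&project_id=" + project_id
    return url

def fetch_issues(base_url, api_key, project_id=None):
    # Returns the list of issue dicts from the JSON response
    req = urllib.request.Request(
        issues_url(base_url, project_id),
        headers={"X-Redmine-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["issues"]
```

The same endpoint accepts POST for creating issues, which is what makes integration with separate software on another server practical.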
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Is it risky to have a dependency on an open source service?
Unlike an open source DLL or component, the service obviously needs to be constantly running; is it therefore a business risk to rely on it? What happens if the open source service disappears for whatever reason?
The service in question would not be used for a business-critical application, but if successful it will obviously gain in importance.
Many thanks
If it's really "open source" (as opposed to merely free), you can download the source and run it yourself if the original provider goes away. Of course, you'd want to download the source ahead of time, because if the service provider goes away, there's no guarantee that there will be a site to download it from. Also, you'd probably want to keep backups of the data for yourself if you can.
But if you're misusing the term "open source" to mean a free service like the Google Maps API, then yeah, if it goes away, you're boned. But if Google Maps goes away, so goes half the net.
What exactly is an "open source service"?
Any old website that offers an API? Yup, depending on it is a risk - they could go under or start charging a fee.
Or a site that publishes the software it's running under an open source license? Just download a copy, and if the site goes away, you always have the option to run it yourself.
The better question is this:
What happens if the paid enterprise service you rely on goes under, and you're left without any code whatsoever, and no support?
With that in mind, open source guarantees a future: all you have to do is find somebody to hack on it. With proprietary software, on the other hand, legal hilarity ensues.
IMHO, the same as for a closed source service.
Both usually have the same chances of being closed - with the usual surprises, of course, as Google and Microsoft also close services without any prior notice.
As Paul says, you can run the service yourself if it gets very important, if it closes, or if you need big things from it.
But the most important thing, apart from being open or closed source, is access to your data... in case the service closes or you need to move away, will you have access to all your raw data so you can migrate?
Probably yes. But if it is not a mission-critical application, it might be okay.
I personally would try to avoid it just because of its vague future. But you never really know whether a commercial service will live through next year either.
Just don't bind tightly to this service, and don't design strictly around it. Design so as to facilitate a switch to another similar service in the future, or even to a very different approach.
Design for the family of similar services, and always have an escape plan in case this service goes away - or even all services of its class.
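That advice can be sketched as a small abstraction layer; the `GeocodingService` name and both providers below are entirely hypothetical stubs, there only to show the shape of the escape plan:

```python
from abc import ABC, abstractmethod

class GeocodingService(ABC):
    # The rest of the code depends only on this interface, never on
    # a concrete provider
    @abstractmethod
    def locate(self, address: str) -> tuple: ...

class PrimaryProvider(GeocodingService):
    def locate(self, address):
        # would call the external service here; stubbed for illustration
        return (0.0, 0.0)

class FallbackProvider(GeocodingService):
    def locate(self, address):
        # alternative service, or a self-hosted copy of the open source code
        return (0.0, 0.0)

def make_service(primary_available: bool) -> GeocodingService:
    # the escape plan lives in one place: swap providers here
    return PrimaryProvider() if primary_available else FallbackProvider()
```

If the external service disappears, only the concrete provider class and this factory change; the callers are untouched.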
I've also had similar considerations about this service: http://www.webservicex.net
It seems to be freely accessible, but who really runs it, and who can guarantee it will be there tomorrow?
As for tomorrow, even Google Mail happens to be down on some days. What do you expect then of a free open-source service? :)