How can I make a time-series forecast using an RNN on a Raspberry Pi? I'm doing an IoT project and I want to make time-series predictions using recurrent neural networks running on Raspberry Pi hardware, but I'm not finding models for this type of project.
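A common starting point is a small LSTM trained on sliding windows of the series; a model of this size fits comfortably on a Raspberry Pi. Below is a minimal PyTorch sketch using a synthetic sine wave as a stand-in for real sensor readings; the window length, hidden size, and training loop are placeholders to adapt to your data.

```python
import numpy as np
import torch
from torch import nn


class LSTMForecaster(nn.Module):
    """Predicts the next value of a univariate series from a sliding window."""

    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last hidden state


# Toy data: sliding windows over a sine wave (stand-in for sensor readings).
series = np.sin(np.linspace(0, 100, 2000)).astype("float32")
window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:][:, None]

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                     # keep training light for a Pi
    pred = model(torch.from_numpy(X))
    loss = loss_fn(pred, torch.from_numpy(y))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

If training on the Pi itself is too slow, a common pattern is to train on a desktop and only run inference on the Pi with the exported weights.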
I have a huge dataset in AWS S3 that I can't load onto my computer to train a neural network. I want to use Pytorch Lightning to train that NN on that dataset.
My question is, how can I use data modules (or any other tool from Pytorch Lightning) in order to load batches from that dataset, preprocess them and feed them into the training loop?
Keep in mind that my filesystem is not big enough to hold the data, so I guess caching is out of the picture. However, it would be interesting to load several batches in parallel so that data loading is not a bottleneck.
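One way to do this (a sketch, not the only Lightning pattern) is to wrap an IterableDataset that streams objects straight from S3 with boto3 inside a LightningDataModule, and let several DataLoader workers pull different shards in parallel so nothing has to be cached on disk. The bucket layout (many small .npy shards), the decoding, and the preprocessing below are placeholders for your own data.

```python
import io

import boto3
import numpy as np
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, IterableDataset, get_worker_info


class S3IterableDataset(IterableDataset):
    """Streams samples straight from S3 so the local filesystem is never filled."""

    def __init__(self, bucket, prefix):
        self.bucket = bucket
        self.prefix = prefix

    def _list_keys(self):
        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=self.bucket, Prefix=self.prefix):
            for obj in page.get("Contents", []):
                yield obj["Key"]

    def __iter__(self):
        worker = get_worker_info()
        s3 = boto3.client("s3")  # one client per worker process
        for i, key in enumerate(self._list_keys()):
            # Shard the keys across DataLoader workers so downloads run in parallel.
            if worker is not None and i % worker.num_workers != worker.id:
                continue
            body = s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
            array = np.load(io.BytesIO(body))      # placeholder decoding
            x = torch.from_numpy(array).float()    # placeholder preprocessing
            yield x[:-1], x[-1:]                   # placeholder (features, target) split


class S3DataModule(pl.LightningDataModule):
    def __init__(self, bucket, prefix, batch_size=32):
        super().__init__()
        self.bucket, self.prefix, self.batch_size = bucket, prefix, batch_size

    def train_dataloader(self):
        return DataLoader(
            S3IterableDataset(self.bucket, self.prefix),
            batch_size=self.batch_size,
            num_workers=4,  # several workers stream batches concurrently
        )
```

The Trainer then just receives the data module (trainer.fit(model, datamodule=S3DataModule("my-bucket", "train/"))); the data is re-listed and re-streamed each epoch instead of being stored locally.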
In my setup, an IoT device is connected to an MQTT broker and publishes measurements. We duplicate this traffic to another PC where we want to perform analytics on the MQTT data. We cannot create a new client on this broker and subscribe to the topics; we just want to implement a sort of sniffer for these messages and extract the measurements as JSON.
I have experimented with scapy and various Python scripts but haven't succeeded. For example, it seems that the paho-mqtt library for Python requires a connection to the actual broker, but as I said this is not an option. Any idea how to approach the problem?
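Since the traffic is already mirrored to the analytics PC, one approach is to skip MQTT clients entirely and dissect the raw packets with scapy's MQTT contrib layer, extracting the topic and payload from PUBLISH messages. A rough sketch, assuming plain (non-TLS) MQTT on port 1883, JSON payloads, and a placeholder interface name; note that it does not reassemble messages split across TCP segments:

```python
import json

from scapy.all import sniff
from scapy.contrib.mqtt import MQTTPublish  # importing binds the MQTT layer to TCP 1883


def handle(pkt):
    if pkt.haslayer(MQTTPublish):
        pub = pkt[MQTTPublish]
        topic = pub.topic.decode(errors="replace")
        try:
            measurement = json.loads(pub.value.decode(errors="replace"))
        except json.JSONDecodeError:
            measurement = pub.value            # keep the raw payload if it isn't JSON
        print(topic, measurement)


# "eth0" is a placeholder for the interface that receives the mirrored traffic.
sniff(iface="eth0", filter="tcp port 1883", prn=handle, store=False)
```

If the broker uses TLS (typically port 8883), the payloads cannot be recovered by sniffing; the traffic would need to be decrypted or mirrored before encryption.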
Good morning, everyone,
As part of migrating a heavy-client project to an application connected to a server (currently studying DataSnap under XE10.2), we need to send and retrieve information from the server on an ad hoc basis.
We would like to have some feedback on other available technologies,
their durability and ease of adaptation.
Here is the profile of our application:
The client connects to a remote server that can be hosted elsewhere.
There can be up to 300 clients connected at the same time over a period of 3 days.
These 300 clients can send at a variable interval (every 1 to 2 hours) and at different moments depending on the time of day (different countries).
Each connection can transmit up to 5,000 records, so 300 × 5,000 = 1,500,000 records over a period of one month.
For the moment we have chosen the DataSnap solution because it is already used in medical applications, especially for its ease of migration from the Delphi heavy-client project to this type of architecture, and also for its longevity with Delphi.
Our questions: what do you think?
What arguments and intermediate or other solutions do you propose? As far as RAD Server is concerned, it has a cost per license, but are there examples of migrating a DataSnap application to RAD Server?
What are your experiences in these different areas? (concrete cases, please)
On our side we will launch a simulation of 300 clients each transmitting 5,000 JSON REST requests to our DataSnap server, which will insert each of these requests into a 40 GB MySQL database; the insertion will return an acknowledgement of receipt and a write acknowledgement (a simple boolean).
Thank you for your feedback; on our side we will publish the results of our tests.
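For what it is worth, that simulation can also be scripted outside Delphi; below is a rough Python sketch of 300 concurrent clients each posting 5,000 JSON requests and counting the boolean acknowledgements. The endpoint URL and payload shape are invented placeholders, since they depend on your DataSnap server methods.

```python
import concurrent.futures
import json
import urllib.request

# Placeholder endpoint; a real DataSnap REST URL depends on your server methods.
SERVER = "http://datasnap-server:8080/datasnap/rest/TServerMethods/Insert"


def run_client(client_id, n_requests=5000):
    acknowledged = 0
    for i in range(n_requests):
        payload = json.dumps({"client": client_id, "seq": i, "value": 42}).encode()
        req = urllib.request.Request(
            SERVER, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            # The server is assumed to answer with a bare JSON boolean acknowledgement.
            acknowledged += json.loads(resp.read()) is True
    return acknowledged


with concurrent.futures.ThreadPoolExecutor(max_workers=300) as pool:
    results = pool.map(run_client, range(300))
print(sum(results), "acknowledged inserts")
```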
There are several solutions, but I recommend our Open Source mORMot framework.
Its SOA is based on interface type definitions, it is REST/JSON from the ground up, and it has been reported to have very good performance and stability, especially compared to DataSnap. It is Open Source and works with both Delphi and FPC (also under Linux), so it could be considered a safer solution for the medium/long term. DataSnap hasn't evolved much in years, and I don't understand the RAD Server "black box" approach.
About migrating an existing database or system, check this blog article which shows some basic steps with mORMot.
You have other building blocks available, like an ORM, an MVC layer for dynamic web site generation, logging, interface stubbing, a high-performance database layer, cross-platform clients, exhaustive documentation and a lot of other features.
We are developing a FiWare city sensor network that:
Each sensor processes data in real time and publishes its averages every N minutes to our server;
Some server-side math is done with those reported averages, which will generate new fields or averages of already-reported fields (e.g. average by day);
In the end, there will be a Wirecloud component showing a map with the location of every sensor and a plot showing the several fields acquired, per sensor.
Additionally, sensors can raise alarms, every server and sensor access must be secure, and server database scalability is a future concern. At the moment we have this architecture (OCB stands for Orion Context Broker):
In that architecture, the "Webservice" and "Processing" components are made in-house, but after reading a little bit more about FIWare components (particularly the IoT stack) I've realised that there are more components that we might integrate here.
What are you using for a solution like this? It looks fairly generic (secure attribute publishing, storage, post-processing and value plotting).
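For the publish/storage part, the usual FIWARE route is to push each measurement as an NGSI entity update to the Orion Context Broker and let other components (e.g. STH-Comet or QuantumLeap) subscribe to it to persist the history your server-side averages need. A minimal sketch of creating a sensor entity through Orion's NGSI-v2 API; the endpoint, service headers, and attribute names are assumptions:

```python
import requests

OCB = "http://localhost:1026"  # assumed Orion Context Broker endpoint

entity = {
    "id": "urn:ngsi-ld:Sensor:001",  # placeholder sensor identifier
    "type": "Sensor",
    "temperature": {"value": 21.4, "type": "Number"},
    "location": {
        "value": {"type": "Point", "coordinates": [-8.61, 41.15]},
        "type": "geo:json",
    },
}

# Fiware-Service / Fiware-ServicePath are the usual multi-tenancy headers;
# adjust or drop them depending on how your broker is configured.
resp = requests.post(
    f"{OCB}/v2/entities",
    json=entity,
    headers={"Fiware-Service": "city", "Fiware-ServicePath": "/sensors"},
)
resp.raise_for_status()
```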
I have the following scenario where a company has two regions on the Amazon cloud, Region 1 in the US and Region 2 in Asia. In the current architecture, AWS DynamoDB and a MySQL-RDS solution are used and installed in the US region. The EC2 servers in the Asia region, which hold the business logic, have to access DynamoDB and RDS in the US region to get or update data.
The company now wants to install DynamoDB and MySQL-RDS in the Asia region to get better performance, so the EC2 servers in the Asia region can get the required data from the same region.
The main issue now is how we can sync the data between the two regions; the current DynamoDB and RDS don't inherently support multiple regions.
Are there any best practices in such a case?
This is a big problem when the access is from different geographies.
RDS has lately added some support for cross-region "read" replicas. Take a look here: http://aws.amazon.com/about-aws/whats-new/2013/11/26/announcing-point-and-click-database-replication-across-aws-regions-for-amazon-rds-for-mysql/
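For reference, such a cross-region read replica can be created by calling the RDS API in the destination region and pointing it at the source instance's ARN; a sketch with boto3, with all identifiers and regions as placeholders:

```python
import boto3

# Create the replica in the Asia region, sourcing from the US instance's ARN.
rds_asia = boto3.client("rds", region_name="ap-southeast-1")
rds_asia.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-asia-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydb",
)
```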
DynamoDB doesn't have this. You might have to think about partitioning your data (keep Asia data in Asia and US data in the US). Another possibility is to speed up reads with an in-memory cache. Don't hit DynamoDB for every read: after every successful read, cache the object in AWS ElastiCache, and set up this cache near the required regions (you will need multiple cache clusters). Then all reads will be fast (since they are now region-local). When the data changes (a write), invalidate the object in the cache as well.
However, this method only speeds up reads (not writes). Typically, most apps will be OK with this.
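Here is a rough sketch of that read-through cache idea, assuming an ElastiCache Redis cluster in the local region and a DynamoDB table still living in the US region; the table name, key schema, endpoints and TTL are all placeholders:

```python
import json

import boto3
import redis

# DynamoDB stays in the US region; the cache cluster sits near the readers.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("items")
cache = redis.Redis(host="my-cache.ap-southeast-1.cache.amazonaws.com", port=6379)


def get_item(item_id):
    cached = cache.get(item_id)
    if cached is not None:                      # fast, region-local read
        return json.loads(cached)
    item = table.get_item(Key={"id": item_id}).get("Item")
    if item is not None:
        # default=str because boto3 returns DynamoDB numbers as Decimal.
        cache.set(item_id, json.dumps(item, default=str), ex=300)
    return item


def put_item(item):
    table.put_item(Item=item)                   # the write still crosses regions
    cache.delete(item["id"])                    # invalidate so readers see fresh data
```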