Problem
I've been working with Leaflet, PostgreSQL and GeoServer, which I use to load millions of data points... Well, that's what I'm supposed to do. Loading these points as GeoJSON via JavaScript eats up RAM, and beyond a certain number of points the web application actually crashes. In Task Manager I can see Google Chrome's RAM usage climbing, but there is still some RAM available.
I am aware that some applications can be restricted to a set amount of RAM.
In short
Can I limit Google Chrome to a set amount of RAM, for example 4 GB?
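For reference, the loading pattern looks roughly like this (a simplified sketch; the GeoServer workspace/layer names and coordinates are placeholders):

// Fetch one large GeoJSON payload from GeoServer and hand it to Leaflet.
const map = L.map('map').setView([52.1, 5.1], 8);
fetch('/geoserver/wfs?service=WFS&version=2.0.0&request=GetFeature' +
      '&typeNames=myws:points&outputFormat=application/json')
  .then((response) => response.json())
  .then((geojson) => {
    // Every feature becomes a layer object in the browser, so millions
    // of points translate directly into millions of in-memory objects.
    L.geoJSON(geojson).addTo(map);
  });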
Related
I have one Compute Engine VM hosting simple apps. My apps are growing, and so is the number of users.
My users work basically from 08:00 AM to 07:00 PM; during this period the CPU and memory usage is high, and working speed is very important.
I'm preparing to expand the memory and processor in the next few days, but I'm looking for a more scalable and cost-effective approach.
Is there a way to automatically add resources when I need them and remove them when I no longer do?
Thanks
The cost of running your VMs is directly related to a number of different factors, such as the type of network in use (premium vs. standard), the machine type, the boot disk image you use (premium vs. open-source images) and the region/zone where your workloads are running, among other things.
Your use case seems to fit managed instance groups (MIGs). With MIGs you essentially configure a template for VMs that share the same attributes. When configuring your MIG, you can specify the CPU/memory threshold beyond which the autoscaler kicks in and adds instances; when utilization drops back below that threshold, the MIG scales down toward the minimum number of instances you configured.
You can also use requests per second as an autoscaling threshold; I would recommend exploring the docs to learn more.
See docs
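For example, attaching an autoscaler to an existing MIG from the command line looks roughly like this (a sketch only; the group name, zone and thresholds are placeholders):

gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone us-central1-a \
    --min-num-replicas 1 \
    --max-num-replicas 5 \
    --target-cpu-utilization 0.75 \
    --cool-down-period 90

With CPU-based autoscaling, instances are added while average utilization stays above the target and removed, down to the minimum, once it falls back below it.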
I'm building an application with Electron. My application has hundreds of videos that are played and paused using the basic HTML5 video functions. The app sometimes uses as much as 8 GB of memory. This is fine for me because I have 16 GB of RAM, but I am unsure what would happen on a computer with less.
Would my application crash on a system with less RAM, or does Chrome automatically evict videos from memory to make space? If so, how does it choose which videos to evict? Is this what is known as "garbage collection"?
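For reference, the pattern in question is roughly this (simplified; the element id is made up):

const video = document.querySelector('#clip-042');
function toggle() {
  if (video.paused) {
    video.play();   // Chromium buffers decoded frames in memory here.
  } else {
    video.pause();  // Pausing alone does not necessarily release them.
  }
}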
I'm looking to buy a Chromebook and install either Ubuntu 14 or Ubuntu 16 on it. I looked at the Unity specs and did some research, but it doesn't appear certain that Unity will run.
I'm wondering: what Chromebook specs will I need to run the Unity GUI and do some light development work?
Further, is a dual-core processor enough to run Unity, or do I need a quad-core CPU? Do I need 4 GB of RAM, or more?
Also, can you recommend one that will work for this need?
Thank you
You'll need at least, and probably more than, 4GB of RAM in order to use Unity effectively in a Linux environment.
A dual-core should be enough for things to run; however, everything is going to be more responsive on a quad-core system.
You will need the best graphics hardware you can find; Intel HD may work, but I would be more optimistic about a Tegra GPU being capable of running Unity. Graphics drivers will probably be a hurdle here.
A Chromebook is going to run out of disk space very quickly. Unity itself takes around 2.5 GB once installed, and each game project, depending on its graphics and audio resources, will consume disk space very quickly as well. A 32 GB drive would be the absolute minimum, and I can still foresee the inevitably full drive causing issues.
Ultimately I would suggest finding a laptop with higher specs than a typical Chromebook if you're serious about using Unity on it.
My best advice here, though, is don't buy a Chromebook for this purpose unless you're confident in the retailer being open-minded about returns.
I'm building a one-off smart-home data collection box. It's expected to run on a raspberry-pi-class machine (~1G RAM), handling about 200K data points per day (each a 64-bit int). We've been working with vanilla MySQL, but performance is starting to crumble, especially for queries on the number of entries in a given time interval.
As I understand it, this is basically exactly what time-series databases are designed for. If anything, the unusual thing about my situation is that the volume is relatively low, and so is the amount of RAM available.
A quick look at Wikipedia suggests OpenTSDB, InfluxDB, and possibly BlueFlood. OpenTSDB suggests 4G of RAM, though that may be for high-volume settings. InfluxDB actually mentions sensor readings, but I can't find a lot of information on what kind of resources are required.
Okay, so here's my actual question: are there obvious red flags that would make any of these systems inappropriate for the project I describe?
I realize that this is an invitation to flame, so I'm counting on folks to keep it on the bright and helpful side. Many thanks in advance!
InfluxDB should be fine with 1 GB RAM at that volume. Embedded sensors and low-power devices like Raspberry Pis are definitely a core use case, although we haven't done much testing with the latest betas beyond compiling on ARM.
InfluxDB 0.9.0 was just released, and 0.9.x should be available in our Hosted environment in a few weeks. The low end instances have 1 GB RAM and 1 CPU equivalent, so they are a reasonable proxy for your Pi performance, and the free trial lasts two weeks.
If you have more specific questions, please reach out to us at influxdb@googlegroups.com or support@influxdb.com and we'll see how we can help.
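In case it helps while you evaluate, a single write against the HTTP API looks roughly like this (a hedged sketch: the database and measurement names are made up, and the line-protocol body shown is the format used by later 0.9.x releases):

// One sensor reading written over InfluxDB's HTTP write endpoint.
fetch('http://localhost:8086/write?db=sensors', {
  method: 'POST',
  // line protocol: measurement,tag=value field=value [timestamp]
  body: 'power,house=h001 value=418',
});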
Try VictoriaMetrics. It should run on systems with low RAM such as Raspberry Pi. See these instructions on how to build it for ARM.
VictoriaMetrics has the following additional benefits for small systems:
It is easy to configure and maintain, since it has zero external dependencies and all configuration is done via a few command-line flags (see the sketch after this list).
It is optimized for low CPU usage and low persistent storage IO usage.
It compresses data well, so it uses a small amount of persistent storage space compared with other solutions.
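For illustration, startup is a single binary plus a couple of flags; a hedged sketch (the paths and values are placeholders):

# Single-binary startup; -retentionPeriod is specified in months here.
./victoria-metrics-prod \
    -storageDataPath=/var/lib/victoria-metrics \
    -retentionPeriod=12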
Have you tried OpenTSDB? We are using OpenTSDB to collect smart-meter data from almost 150 houses, with readings every 10 minutes, which adds up to a lot of data points per day. We haven't tested it on a Raspberry Pi, though; OpenTSDB might be quite heavy for one, since it needs to run a web server, HBase and Java.
Just as a suggestion: you could use the Raspberry Pi as a collecting hub for the smart home and forward the data to a server, storing all the points there. On the server you can then do whatever you want, such as aggregation or statistical analysis, and send the results back to the smart hub (a sketch of this idea follows).
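A hedged sketch of that hub idea, forwarding a batch of readings to OpenTSDB's HTTP API on the server (the host, metric and tag names are placeholders):

const readings = [
  { metric: 'home.power', timestamp: 1434055562, value: 418,
    tags: { house: 'h001', meter: 'main' } },
];
fetch('http://tsdb-server:4242/api/put', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  // /api/put accepts a single data point or an array of them.
  body: JSON.stringify(readings),
});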
ATSD supports the ARM architecture and can be installed on a Raspberry Pi 2 to store sensor data. Currently, Ubuntu or Debian OS is required. Make sure the device has at least 1 GB of RAM and an SD card with a high write speed (60 MB/s or more). The size of the SD card depends on how much data you want to store and for how long; we recommend at least 16 GB, and you should plan ahead. Backup battery power is also recommended, to protect against crashes and ungraceful shutdowns.
Here you can find an in-depth guide on setting up a temperature/humidity sensor paired with an Arduino device. Using the guide you will be able to stream the sensor data into ATSD over the MQTT or TCP protocol. Open-source sketches are included.
I'm setting up OpenShift Enterprise 2 and I'd like to create a district with a larger gear size. Changing
/etc/openshift/resource_limits.conf
on the nodes is straightforward for increasing memory and disk available to the gear, but CPU resource management is less intuitive (from resource_limits.conf):
# cpu
cpu_rt_period_us=100000
cpu_rt_runtime_us=950000
cpu_shares=128
cpu_cfs_quota_us=100000
By default, a gear can only consume a maximum of 100% of a single processor core. If I want to allow a bigger gear size that could allow full utilization of 2 processor cores, how would I do that, or is it currently not possible at all in OpenShift?
Since all the gears are the same, and since 'cpu_shares' are compared on a relative basis when restricting a group, I'm not sure it makes sense to change 'cpu_shares'.
However, 'cpu_cfs_quota_us' looks like it might be the right knob to turn. From this page:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpu.html
It appears that I should be able to double the quota to get a full 2 cores. However, it's not clear whether OpenShift will respect this, since the 'cpu_cfs_period_us' parameter is not even found in resource_limits.conf.
I performed an experiment using 'stress'. I first confirmed that I could load 2 cores under a normal SSH login (using 'stress --cpu 2'). Then I logged in to a gear on that host and ran the same thing. With cpu_cfs_quota_us=100000, each stress process could only consume a maximum of 50% CPU. But after changing to cpu_cfs_quota_us=200000, each process could consume over 99%, so the change appears to work. It would be nice if this were called out in the OpenShift docs...
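For reference, the change and the verification boil down to this (values exactly as in the experiment above):

# /etc/openshift/resource_limits.conf on each node:
cpu_cfs_quota_us=200000    # double the CFS quota: up to 2 full cores per gear

# then, from an SSH session inside a gear:
stress --cpu 2             # both workers should now reach ~100% of a core each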