IoT Agent performance query - FIWARE

Hi, I am working on a project where I am calculating the throughput of an IoT Agent. I need a throughput of 1000 req/s, and I wanted to know the hardware specifications and configuration details of the IoT Agent needed to achieve that throughput.

According to the FIWARE Load Test Reports, you can achieve that throughput in Orion with a relatively small environment.
Environment   Orion-LD         Orion-LD-TRoE
tiny          ~1800 req/s      ~1400 req/s
small         ~3500 req/s      ~2800 req/s
mid           ~12,000 req/s    ~9900 req/s
large         ~26,000 req/s    ~23,000 req/s
Where:
tiny - 1 CPU / 6 GB RAM Orion-LD, 4 CPU / 16 GB RAM MongoDB
small - 2 CPU / 12 GB RAM Orion-LD, 8 CPU / 32 GB RAM MongoDB
mid - 8 CPU / 48 GB RAM Orion-LD, 30 CPU / 48 GB RAM MongoDB
large - 16 CPU / 96 GB RAM Orion-LD, 60 CPU / 96 GB RAM MongoDB
Of course, it may be the IoT Agent that is the bottleneck, so you'll need to investigate that as well, but this should be a reasonable baseline.
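To check where the 1000 req/s ceiling actually sits, you can fire measures straight at the IoT Agent's southbound port with a simple HTTP benchmark. A rough sketch using ApacheBench, assuming the Ultralight 2.0 agent on its default port 7896 and an already-provisioned device (the host, API key, and device ID are placeholders):

echo -n "t|23.5" > measure.txt    # one measure in Ultralight 2.0 syntax (hypothetical attribute)
# 10,000 POSTs with 50 concurrent clients; watch the "Requests per second" line in the report
ab -n 10000 -c 50 -p measure.txt -T 'text/plain' 'http://iot-agent-host:7896/iot/d?k=myapikey&i=device001'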

Related

Amazon AWS RDS 120GB free storage suddenly became 0GB within less than 2 hours

I've been using an AWS RDS db.m4.large instance with 128GB of storage for MySQL (InnoDB) for more than a year now.
The size of my database is now less than 2GB, and the free storage had always been more than 125GB, as captured below, but it suddenly dropped to 0GB in less than 2 hours two weeks ago.
So I upgraded the storage to 250GB, but it dropped to 0GB again two days ago.
(Screenshot: free storage suddenly dropped)
So I asked the AWS customer center about it, and they told me that mysqlInnoDbTablespace takes up 243GB and that I can reduce its size by executing OPTIMIZE TABLE on all of my tables. This is the breakdown they provided:
dataVolumeAvailableSize      43.3 GB
dataVolumeTotalSize          295 GB
dataVolumeUsedSize           251 GB
mysqlErrorLogFileSize        72.4 KB
mysqlGeneralLogBackupSize    0 bytes
mysqlGeneralLogFileSize      0 bytes
mysqlGeneralLogSize          0 bytes
mysqlInnoDbLogSize           256 MB
mysqlInnoDbTablespace        243 GB
mysqlSlowLogBackupSize       124 MB
mysqlSlowLogFileSize         35.9 KB
mysqlSlowLogSize             4.03 GB
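(For reference, OPTIMIZE TABLE can be run across every table in one go with mysqlcheck; the host and credentials below are placeholders. Note that OPTIMIZE only returns space to the operating system when each table has its own .ibd file; with a shared system tablespace, i.e. innodb_file_per_table=OFF, the ibdata file never shrinks.)

# rebuild/defragment all tables in all databases
mysqlcheck --optimize --all-databases -h mydb.example.rds.amazonaws.com -u admin -p
# check whether tables live in their own .ibd files
mysql -h mydb.example.rds.amazonaws.com -u admin -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"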
I did that, but the free storage hasn't increased, so I asked them again; they haven't replied to my inquiry.
Below is the size of my database; it's less than 2GB.
(Screenshot: the size of my database)
Has anybody experienced this kind of issue, or can somebody help me free up the storage?
Thanks.

How many resources does a MySQL event use?

I'm working on a browser game, and I create 7 MySQL events for each player.
Five of the events update a table row every 5 seconds,
and two of them update 2 other tables every second.
I have a Linux VPS with 512 MB of RAM and a single-core CPU.
How many online players can this VPS support?
Thanks.
The minimum system requirement for MySQL on Windows (x86 and x64) is 800 MB of RAM and 500 MB of hard disk space.
Read this article for further details:
dev.mysql.com/doc/mysql-monitor/3.0/en/system-prereqs-reference.html
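As an aside, per-player events scale poorly, since each event wakes the scheduler separately. A sketch of the alternative: one event that updates every player's row at once (the database, table, and column names here are made up for illustration; requires the event scheduler to be enabled):

mysql -u root -p gamedb <<'SQL'
SET GLOBAL event_scheduler = ON;
-- one scheduled event for all players instead of one per player
CREATE EVENT IF NOT EXISTS tick_all_players
  ON SCHEDULE EVERY 1 SECOND
  DO
    UPDATE player_resources SET gold = gold + gold_rate;
SQL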

Google Compute Engine VM disk is very slow

We just switched over to Google Compute Engine and are having major issues with disk speed. It's been about 5% of what we saw on Linode, or worse. It has never exceeded 20 MB/s for writing and 10 MB/s for reading; most of the time it's 15 MB/s for writing and 5 MB/s for reading.
We're currently running an n1-highmem-4 (4 vCPUs, 26 GB memory) machine. CPU and memory aren't the bottleneck: we're just running a script that reads rows from a PostgreSQL database, processes them, and then writes them back. It's a routine job that updates database rows in batches. We tried running 20 processes to take advantage of the multiple cores, but overall progress is still slow.
We suspect the disk is the bottleneck, because disk traffic is abnormally low.
Finally we decided to do some benchmarking. We found that it's not only slow but seems to have a major bug, which is reproducible:
create & connect to an instance
run the benchmark at least three times:
dd if=/dev/zero bs=1024 count=5000000 of=~/5Gb.file
It becomes extremely slow, and we aren't able to finish the benchmark at all.
Persistent Disk performance is proportional to the size of the disk itself and the VM that it is attached to. The larger the disk (or the VM), the higher the performance, so in essence, the price you are paying for the disk or the VM pays not only for the disk/CPU/RAM but also for the IOPS and throughput.
Quoting the Persistent Disk documentation:
Persistent disk performance depends on the size of the volume and the type of disk you select. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges, as the cost of the I/O capability is included in the price of the persistent disk.
Persistent disk performance can be described as follows:
IOPS performance limits grow linearly with the size of the persistent disk volume.
Throughput limits also grow linearly, up to the maximum bandwidth for the virtual machine that the persistent disk is attached to.
Larger virtual machines have higher bandwidth limits than smaller virtual machines.
There's also a more detailed pricing chart on the page which shows what you get per GB of space that you buy (data below is current as of August 2014):
                                          Standard disks   SSD persistent disks
Price (USD/GB per month)                  $0.04            $0.025
Maximum sustained IOPS
  Read IOPS/GB                            0.3              30
  Write IOPS/GB                           1.5              30
  Read IOPS/volume per VM                 3,000            10,000
  Write IOPS/volume per VM                15,000           15,000
Maximum sustained throughput
  Read throughput/GB (MB/s)               0.12             0.48
  Write throughput/GB (MB/s)              0.09             0.48
  Read throughput/volume per VM (MB/s)    180              240
  Write throughput/volume per VM (MB/s)   120              240
and a concrete example on the page of what a particular size of a disk will give you:
As an example of how you can use the performance chart to determine the disk volume you want, consider that a 500GB standard persistent disk will give you:
(0.3 × 500) = 150 small random reads
(1.5 × 500) = 750 small random writes
(0.12 × 500) = 60 MB/s of large sequential reads
(0.09 × 500) = 45 MB/s of large sequential writes
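Note also that the dd invocation in the question uses bs=1024, so it writes in 1 KB blocks and largely measures syscall overhead rather than disk throughput. A sketch of a more representative sequential test (GNU dd flags; the cache drop requires root):

dd if=/dev/zero of=~/bench.file bs=1M count=1024 oflag=direct   # sequential write, bypassing the page cache
sync && echo 3 > /proc/sys/vm/drop_caches                       # drop caches before the read test
dd if=~/bench.file of=/dev/null bs=1M iflag=direct              # sequential read
rm ~/bench.file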

Improving MySQL I/O Performance (Hardware & Partitioning)

I need to improve I/O performance for my database. I'm using the "2xlarge" hardware described below and considering upgrading to the "4xlarge" hardware (http://aws.amazon.com/ec2/instance-types/). Thanks for the help!
Details:
CPU usage is fine (usually under 30%), and uptime load averages range anywhere from 0.5 to 2.0 (though I believe I'm supposed to divide that by the number of CPUs), so that looks okay as well. However, the I/O is bad: iostat shows favorable service times, but the time spent in queue (I take this to mean waiting to access the disk) is far too high. I've configured MySQL to flush to disk every second instead of on every write, which helps, but not enough. Profiling shows that a handful of tables are the culprits for most of the load (both read and write operations). Queries are already indexed and optimized, but not partitioned. Average MySQL states are: Sending data ~45%, Statistics ~20%, Updating ~15%, Sorting result ~8%.
Questions:
How much of a performance gain will I get by upgrading the hardware?
Same question, but if I partition the high-load tables instead?
Machines:
m2.2xlarge
64-bit
4 vCPU
13 ECU
34.2 GB Mem
EBS-Optimized
Network Performance: "Moderate"
m2.4xlarge
64-bit
8 vCPU
26 ECU
68.4 GB Mem
EBS-Optimized
Network Performance: "High"
In my experience, the biggest boost in MySQL performance comes from I/O. You have a lot of RAM. Try setting up a RAM drive and pointing tmpdir at it.
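A minimal sketch of that setup (assuming root access and that you can spare ~2 GB of RAM, as in my config below):

mkdir -p /mnt/mysqltmp
mount -t tmpfs -o size=2G tmpfs /mnt/mysqltmp   # RAM-backed filesystem for MySQL temp tables
chown mysql:mysql /mnt/mysqltmp
# then in my.cnf, under [mysqld]:  tmpdir = /mnt/mysqltmp  (and restart mysqld)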
I have several MySQL servers that are very busy. My settings are below - maybe this can help you tweak your settings.
My setup is:
- Dual 2.66 GHz CPUs (8 cores) with a 6-drive RAID-1E array (1.3TB)
- InnoDB logs on separate SSD drives
- tmpdir on a 2GB tmpfs partition
- 32GB of RAM
InnoDB settings:
innodb_thread_concurrency=16
innodb_buffer_pool_size = 22G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_log_files_in_group=8
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2 (this is a slave machine; 1 is not required for my purposes)
innodb_flush_method=O_DIRECT
Current queries per second avg: 5185.650
I am using Percona Server, which in my testing is quite a bit faster than stock MySQL.
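For comparison, that queries-per-second average is the figure mysqladmin status prints; you can check your own server with:

mysqladmin -u root -p status                                         # prints uptime, threads, and "Queries per second avg"
mysqladmin -u root -p -i 10 -r extended-status | grep -w Questions   # or sample the raw counter every 10 s (Ctrl-C to stop)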

Heavy MySQL usage: CPU or memory?

I have an Amazon EC2 instance, and the project we have on the server does a lot of INSERTs and UPDATEs and a few complex SELECTs.
We are finding that MySQL will quite often take up a lot of the CPU.
I am trying to establish whether more memory or more CPU is the better upgrade for the setup described below.
Below is the output of cat /proc/meminfo:
MemTotal: 7347752 kB
MemFree: 94408 kB
Buffers: 71932 kB
Cached: 2202544 kB
SwapCached: 0 kB
Active: 6483248 kB
Inactive: 415888 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 168264 kB
Writeback: 0 kB
AnonPages: 4617848 kB
Mapped: 21212 kB
Slab: 129444 kB
SReclaimable: 86076 kB
SUnreclaim: 43368 kB
PageTables: 54104 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3673876 kB
Committed_AS: 5384852 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 180 kB
VmallocChunk: 34359738187 kB
Current setup: High-CPU Extra Large Instance (API name: c1.xlarge)
- 7 GB of memory
- 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
- 1690 GB of instance storage
- 64-bit platform
- I/O performance: High

Possible setup: High-Memory Double Extra Large Instance (API name: m2.2xlarge)
- 34.2 GB of memory
- 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
- 850 GB of instance storage
- 64-bit platform
- I/O performance: High
I would go for the 32GB of memory and maybe more hard disks in RAID. CPU won't help that much; you have enough CPU power. You also need to configure MySQL correctly:
- Leave 1-2 GB for the OS cache and for temp tables.
- Increase tmp_table_size.
- Remove swap.
- Optimize query_cache_size (don't make it too big; see the MySQL documentation about it).
- Periodically run FLUSH QUERY CACHE. If your query cache is <512 MB, run it every 5 minutes (see the cron sketch after the quote below). This doesn't empty the cache; it optimizes (defragments) it. From the MySQL docs:
Defragment the query cache to better utilize its memory. FLUSH QUERY CACHE does not remove any queries from the cache, unlike FLUSH TABLES or RESET QUERY CACHE.
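A sketch of that periodic run via cron (the defaults file is a placeholder; avoid putting the password directly on the command line):

# /etc/cron.d/flush-query-cache: defragment the query cache every 5 minutes
*/5 * * * * root mysql --defaults-extra-file=/root/.my.cnf -e 'FLUSH QUERY CACHE;'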
However, I noticed that the other option has half the disk space (850GB), which might mean a reduced number of hard disks. That's generally a bad idea: the biggest bottleneck in databases is the hard disks. If you use RAID5, make sure you don't use fewer disks. If you don't use RAID at all, I would suggest RAID 0.
Use vmstat and iostat to find out whether CPU or I/O is the bottleneck (if it's I/O, add more RAM and load the data into memory). Run them from a shell and check the results:
vmstat 5
iostat -dx 5
If CPU is the problem, vmstat will show high values in the us column and iostat will show low disk utilization (%util).
If I/O is the problem, vmstat will show low values in the us column and iostat will show high disk utilization (%util); by high I mean >50%.
It depends on the application.
You could use memcached to cache MySQL query results. This would ease CPU usage a bit, though with this method you would want more RAM for storing the cached results.
On the other hand, if that's not feasible given the type of application, then I would recommend more CPU.
There are not many reasons for MySQL to use a lot of CPU: it is either the processing of stored routines (stored procedures or stored functions) or sorting that eats CPU.
If you are using a lot of CPU due to stored routines, you are doing it wrong and your soul cannot be saved anyway.
If you are using a lot of CPU due to sorting, some things can be done, depending on the nature of your queries: you can extend indexes to include the ORDER BY columns at the end, or you can drop the ORDER BY clauses and sort in the client.
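For example, with a hypothetical orders table where SELECT ... WHERE customer_id = 42 ORDER BY created_at triggers a filesort, extending the existing index with the sort column removes the sort (all names here are made up):

mysql -u root -p shopdb -e "ALTER TABLE orders DROP INDEX idx_customer, ADD INDEX idx_customer_created (customer_id, created_at);"
# verify: EXPLAIN should no longer show 'Using filesort'
mysql -u root -p shopdb -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at;"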
Which approach to choose depends on the actual cause of the CPU usage (is it queries and sorting?) and on the actual queries, so in any case you will need better monitoring first.
Without monitoring information, the general advice is always: buy more memory, not more CPU, for a database.
Doesn't the on-demand nature of EC2 make it rather straightforward to rent the possible setup for a day and do some load testing? Measurements speak louder than words.
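One way to do that (a sketch assuming sysbench 1.x, an existing scratch database named sbtest on the candidate instance, and placeholder host/credentials):

# create test tables on the candidate instance
sysbench oltp_read_write --db-driver=mysql --mysql-host=candidate-host \
  --mysql-user=sbtest --mysql-password=SECRET --mysql-db=sbtest \
  --tables=10 --table-size=1000000 prepare
# 10-minute mixed read/write load with 16 client threads
sysbench oltp_read_write --db-driver=mysql --mysql-host=candidate-host \
  --mysql-user=sbtest --mysql-password=SECRET --mysql-db=sbtest \
  --tables=10 --table-size=1000000 --threads=16 --time=600 run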
Use "High-CPU Extra Large Instance".
In your current setup, MySQL is not constrained by memory:
MemTotal: 7347752 kB
MemFree: 94408 kB
Buffers: 71932 kB
Cached: **2202544 kB**
Out of 7 GB memory, 2 GB is unused and being used by OS as I/O cache.
In this case, increasing CPU count would give you more bang for buck.