PHP 7 Performance - mysql

I've tried to reproduce this benchmark, which compares PHP 7 with older versions on a WordPress server: http://talks.php.net/oz15#/wpbench
My configuration is nearly the same: the server has an i7, an SSD, 16 GB RAM, and runs Debian with nginx as the web server. Surprisingly, my results differ a lot from the ones linked above.
In my tests Siege (https://www.joedog.org/siege-home/) outputs the following:
For PHP 7.0.0RC1:
siege -c100 -r100 http://10.22.255.133/wordpress/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege.. done.
Transactions: 10000 hits
Availability: 100.00 %
Elapsed time: 131.61 secs
Data transferred: 95.77 MB
Response time: 0.75 secs
Transaction rate: 75.98 trans/sec
Throughput: 0.73 MB/sec
Concurrency: 56.98
Successful transactions: 10000
Failed transactions: 0
Longest transaction: 1.01
Shortest transaction: 0.04
For PHP 5.6.12:
siege -c100 -r100 http://10.22.255.133/wordpress/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege.. done.
Transactions: 10000 hits
Availability: 100.00 %
Elapsed time: 63.41 secs
Data transferred: 95.77 MB
Response time: 0.03 secs
Transaction rate: 157.70 trans/sec
Throughput: 1.51 MB/sec
Concurrency: 4.45
Successful transactions: 10000
Failed transactions: 0
Longest transaction: 0.63
Shortest transaction: 0.01
Looking at the transaction rate, PHP 5 appears to be about twice as fast as PHP 7, which I can hardly believe.
Another interesting fact: running this benchmark (http://www.php-benchmark-script.com/) shows PHP 7 being about 3 times faster than PHP 5 (of course on the same server where I also tested WordPress). The measured results were:
PHP 7.0.0RC1 | PHP 5.5.28
Math: 0.201 | 0.683
String Manipulation: 0.271 | 0.77
Loops: 0.166 | 0.486
If Else: 0.12 | 0.295
I've uploaded both phpinfo() files in case that helps:
PHP Version 7.0.0RC1: http://simsso.de/downloads/stackoverflow/php7.html
PHP Version 5.6.12-0+deb8u1: http://simsso.de/downloads/stackoverflow/php5.html
Do you have any idea why PHP 7 is that much slower in my tests with Wordpress?
With opcache enabled, PHP 7 is actually twice as fast as PHP 5. Thanks, Mjh, for the hint!
I've made the following measurements on a WordPress server filled with random content.
Siege now outputs the following for PHP 7.0.0RC1:
Transactions: 10000 hits
Availability: 100.00 %
Elapsed time: 62.14 secs
Data transferred: 604.20 MB
Response time: 0.02 secs
Transaction rate: 160.93 trans/sec
Throughput: 9.72 MB/sec
Concurrency: 3.77
Successful transactions: 10000
Failed transactions: 0
Longest transaction: 0.41
Shortest transaction: 0.01
And PHP 5.6.12:
siege -c100 -r100 http://10.22.255.133/wordpress/
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege.. done.
Transactions: 10000 hits
Availability: 100.00 %
Elapsed time: 119.98 secs
Data transferred: 604.20 MB
Response time: 0.60 secs
Transaction rate: 83.35 trans/sec
Throughput: 5.04 MB/sec
Concurrency: 49.86
Successful transactions: 10000
Failed transactions: 0
Longest transaction: 4.06
Shortest transaction: 0.04

According to the output of phpinfo() you posted, opcache isn't enabled for your PHP 7, while it is for PHP 5. That alone can account for a huge difference.
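If it helps, here is a minimal example of the kind of php.ini fragment that enables opcache for the web SAPI. The extension file name and paths vary by distribution and build, so treat this as a sketch; check `php --ini` for the config files actually loaded:

```ini
; Example fragment only - the zend_extension path/name depends on the build
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
```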

I currently see the same surprising results on the CLI side.
One of my old projects uses a Phing build. It used to run on PHP 5.3, then PHP 5.6.
I tried PHP 7 and noticed a huge difference, so I decided to time the script execution.
FYI, it is a real-life project with thousands of files processed during the build.
Build using PHP 5.3.29:
3 minutes and 44 seconds elapsed.
Build using PHP 7.2.11:
11 minutes and 41 seconds elapsed.
I noticed the CLI did not have opcache activated; here are the results with opcache:
Build using PHP 7.2.11 + opcache:
12 minutes and 18 seconds elapsed.
Yes, WORSE
FYI:
$ php --info |grep opcache
opcache.blacklist_filename => no value => no value
opcache.consistency_checks => 0 => 0
opcache.dups_fix => Off => Off
opcache.enable => On => On
opcache.enable_cli => On => On
opcache.enable_file_override => Off => Off
opcache.error_log => no value => no value
opcache.file_cache => no value => no value
opcache.file_cache_consistency_checks => 1 => 1
opcache.file_cache_only => 0 => 0
opcache.file_update_protection => 2 => 2
opcache.force_restart_timeout => 180 => 180
opcache.huge_code_pages => Off => Off
opcache.inherited_hack => On => On
opcache.interned_strings_buffer => 8 => 8
opcache.lockfile_path => /tmp => /tmp
opcache.log_verbosity_level => 1 => 1
opcache.max_accelerated_files => 10000 => 10000
opcache.max_file_size => 0 => 0
opcache.max_wasted_percentage => 5 => 5
opcache.memory_consumption => 128 => 128
opcache.opt_debug_level => 0 => 0
opcache.optimization_level => 0x7FFFBFFF => 0x7FFFBFFF
opcache.preferred_memory_model => no value => no value
opcache.protect_memory => 0 => 0
opcache.restrict_api => no value => no value
opcache.revalidate_freq => 2 => 2
opcache.revalidate_path => Off => Off
opcache.save_comments => 1 => 1
opcache.use_cwd => On => On
opcache.validate_permission => Off => Off
opcache.validate_root => Off => Off
opcache.validate_timestamps => On => On
By the way, I have to say I never noticed a huge difference in production with Apache when we switched from PHP 5 to PHP 7. Despite all of the benchmarks we see online, the difference is far from obvious.
Needless to say, for that project I will stick to the PHP 5 version.
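One detail worth noting about the CLI numbers: with opcache.enable_cli alone, the cache lives only inside a single PHP process, so a build that spawns fresh `php` processes pays the compilation cost every time (and adds opcache's own startup overhead). Your own output shows opcache.file_cache unset; a sketch of a fragment that persists compiled scripts across CLI invocations (the directory is an example and must exist and be writable by the PHP user):

```ini
; Sketch: write compiled scripts to disk so separate CLI runs can reuse them
opcache.enable_cli=1
opcache.file_cache=/var/tmp/php-opcache
```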

Related

Laravel - Chunk update too slow (SQLSRV - MYSQL)

I have a controller function that selects the entire Clients table of a SQL Server database (remote, with less than 1 ms connection latency) and updates the Clients table in my MySQL DB; there are approx. 15000 rows.
During development on my computer (Windows 10), it used to take 24-26 seconds to complete the update. Now on the production server, it takes between 8 and 10 minutes. I tested the connection between my server and the SQL Server and it is really fast, so the problem is not the SQL driver but MySQL.
Server: Ubuntu 18.04 LTS, CPU usage 2.13% of 4 CPU(s), memory usage 79.13% (6.26 GiB of 7.91 GiB)
MySQL Ver 5.7.32
Mytop: 50 qps (now) when running the function, slow qps: 0.0, key efficiency 50% (it was 97% before restarting the MySQL service)
Code
ClienteAsesorSql::where('Concepto', 'NOT LIKE', '%RESERVADO%')
    ->chunk(500, function ($clientes_asesor) {
        foreach ($clientes_asesor as $clientes) {
            // updateOrCreate() already persists the row, so a separate
            // save() call afterwards would issue a redundant query per record.
            Cliente::updateOrCreate(['cli_id' => $clientes->ClienteId], [
                'cli_nombre'    => $clientes->Concepto,
                'cli_email'     => $clientes['e-mail'],
                'cli_telefono'  => $clientes->Telefono,
                'cli_movil'     => $clientes->SMS,
                'cli_domicilio' => $clientes->DomicilioNoFiscal,
                'created_at'    => $clientes->FechaEstado,
            ]);
        }
    });

qemu kvm vm can't access network with ovs dpdk

I'm using OVS with DPDK to improve network performance, but I can't solve this problem by myself.
DPDK dev bind script output
# dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:07:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=igb,vfio-pci,uio_pci_generic
0000:07:00.1 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=igb,vfio-pci,uio_pci_generic
Network devices using kernel driver
===================================
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=eno1 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=eno2 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
No 'Baseband' devices detected
==============================
No 'Crypto' devices detected
============================
No 'Eventdev' devices detected
==============================
No 'Mempool' devices detected
=============================
No 'Compress' devices detected
==============================
No 'Misc (rawdev)' devices detected
===================================
ovs config
# ovs-vsctl --no-wait get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="0x6", pmd-cpu-mask="0x24"}
cpu info
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
Stepping: 4
CPU MHz: 2965.447
CPU max MHz: 3100.0000
CPU min MHz: 1200.0000
BogoMIPS: 5199.97
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 3 MiB
L3 cache: 30 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx es
t tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
hugepage info
# grep Huge /proc/meminfo
AnonHugePages: 1214464 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 8192
HugePages_Free: 5846
HugePages_Rsvd: 1488
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 16777216 kB
vm startup script
/usr/bin/qemu-system-x86_64 \
-enable-kvm \
-cpu host,kvm=off \
-smp 4 \
-m 8192M \
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
-mem-prealloc \
-chardev socket,id=char-vm-2004-tpl,path=/var/run/openvswitch-vhost/vhost-vm-2004-tpl,server \
-netdev type=vhost-user,id=net-vm-2004-tpl,chardev=char-vm-2004-tpl,vhostforce \
-device virtio-net-pci,mac=52:54:14:cb:ab:6c,netdev=net-vm-2004-tpl \
-drive file=/opt/image/ubuntu-2004-tpl.img,if=virtio \
-vga qxl \
-spice port=15937,disable-ticketing \
-qmp tcp:0.0.0.0:25937,server,nowait \
-daemonize
ovs status
# ovs-vsctl show
2a4487e3-124a-4b66-92e1-1e824fd9a138
Bridge br0
datapath_type: netdev
Port vhost-vm-2004-tpl
Interface vhost-vm-2004-tpl
type: dpdkvhostuserclient
options: {vhost-server-path="/var/run/openvswitch-vhost/vhost-vm-2004-tpl"}
Port dpdk-p0
Interface dpdk-p0
type: dpdk
options: {dpdk-devargs="0000:07:00.0"}
Port br0
Interface br0
type: internal
ovs_version: "2.14.90"
ovs OpenFlow status
# ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ecf4bbe2f494
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk-p0): addr:ec:f4:bb:e2:f4:94
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
2(vhost-vm-2004-t): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:ec:f4:bb:e2:f4:94
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
The VM can't get an IP address via DHCP, and when I run "ip link set ens3 up" in the VM,
it tells me "RTNETLINK answers: Operation not permitted".
The host kernel version is
5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
The VM kernel version is
5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
DPDK version:
DPDK 19.11.5
Open vSwitch version:
2.14.90 (commit id:93023e80bd13ec1f09831eba484cf4621582d1a5 of https://github.com/openvswitch/ovs branch master)
ovs full log
2020-10-27T17:46:36.950Z|00001|vlog|INFO|opened log file /usr/local/var/log/openvswitch/ovs-vswitchd.log
2020-10-27T17:46:36.982Z|00002|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 1
2020-10-27T17:46:36.982Z|00003|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0
2020-10-27T17:46:36.982Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 24 CPU cores
2020-10-27T17:46:36.983Z|00005|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
2020-10-27T17:46:36.983Z|00006|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
2020-10-27T17:46:36.985Z|00007|dpdk|INFO|Using DPDK 19.11.5
2020-10-27T17:46:36.985Z|00008|dpdk|INFO|DPDK Enabled - initializing...
2020-10-27T17:46:36.985Z|00009|dpdk|INFO|No vhost-sock-dir provided - defaulting to /usr/local/var/run/openvswitch
2020-10-27T17:46:36.985Z|00010|dpdk|INFO|IOMMU support for vhost-user-client disabled.
2020-10-27T17:46:36.985Z|00011|dpdk|INFO|POSTCOPY support for vhost-user-client disabled.
2020-10-27T17:46:36.985Z|00012|dpdk|INFO|Per port memory for DPDK devices disabled.
2020-10-27T17:46:36.985Z|00013|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x6 --socket-mem 1024,1024 --socket-limit 1024,1024.
2020-10-27T17:46:36.988Z|00014|dpdk|INFO|EAL: Detected 24 lcore(s)
2020-10-27T17:46:36.988Z|00015|dpdk|INFO|EAL: Detected 2 NUMA nodes
2020-10-27T17:46:37.026Z|00016|dpdk|INFO|EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
2020-10-27T17:46:37.042Z|00017|dpdk|INFO|EAL: Selected IOVA mode 'PA'
2020-10-27T17:46:37.051Z|00018|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB
2020-10-27T17:46:37.051Z|00019|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB
2020-10-27T17:46:37.051Z|00020|dpdk|WARN|EAL: No available hugepages reported in hugepages-1048576kB
2020-10-27T17:46:37.051Z|00021|dpdk|INFO|EAL: Probing VFIO support...
2020-10-27T17:46:37.051Z|00022|dpdk|INFO|EAL: VFIO support initialized
2020-10-27T17:46:37.930Z|00023|dpdk|INFO|EAL: PCI device 0000:01:00.0 on NUMA socket 0
2020-10-27T17:46:37.930Z|00024|dpdk|INFO|EAL: probe driver: 8086:1528 net_ixgbe
2020-10-27T17:46:37.930Z|00025|dpdk|INFO|EAL: PCI device 0000:01:00.1 on NUMA socket 0
2020-10-27T17:46:37.930Z|00026|dpdk|INFO|EAL: probe driver: 8086:1528 net_ixgbe
2020-10-27T17:46:37.930Z|00027|dpdk|INFO|EAL: PCI device 0000:07:00.0 on NUMA socket 0
2020-10-27T17:46:37.930Z|00028|dpdk|INFO|EAL: probe driver: 8086:1521 net_e1000_igb
2020-10-27T17:46:37.995Z|00029|dpdk|INFO|EAL: PCI device 0000:07:00.1 on NUMA socket 0
2020-10-27T17:46:37.996Z|00030|dpdk|INFO|EAL: probe driver: 8086:1521 net_e1000_igb
2020-10-27T17:46:38.067Z|00031|dpdk|INFO|DPDK Enabled - initialized
2020-10-27T17:46:38.071Z|00032|pmd_perf|INFO|DPDK provided TSC frequency: 2600000 KHz
2020-10-27T17:46:38.083Z|00033|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports recirculation
2020-10-27T17:46:38.083Z|00034|ofproto_dpif|INFO|netdev#ovs-netdev: VLAN header stack length probed as 1
2020-10-27T17:46:38.083Z|00035|ofproto_dpif|INFO|netdev#ovs-netdev: MPLS label stack length probed as 3
2020-10-27T17:46:38.083Z|00036|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports truncate action
2020-10-27T17:46:38.083Z|00037|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports unique flow ids
2020-10-27T17:46:38.083Z|00038|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports clone action
2020-10-27T17:46:38.083Z|00039|ofproto_dpif|INFO|netdev#ovs-netdev: Max sample nesting level probed as 10
2020-10-27T17:46:38.083Z|00040|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports eventmask in conntrack action
2020-10-27T17:46:38.083Z|00041|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_clear action
2020-10-27T17:46:38.083Z|00042|ofproto_dpif|INFO|netdev#ovs-netdev: Max dp_hash algorithm probed to be 1
2020-10-27T17:46:38.083Z|00043|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports check_pkt_len action
2020-10-27T17:46:38.083Z|00044|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports timeout policy in conntrack action
2020-10-27T17:46:38.083Z|00045|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_state
2020-10-27T17:46:38.083Z|00046|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_zone
2020-10-27T17:46:38.083Z|00047|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_mark
2020-10-27T17:46:38.083Z|00048|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_label
2020-10-27T17:46:38.083Z|00049|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_state_nat
2020-10-27T17:46:38.084Z|00050|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_orig_tuple
2020-10-27T17:46:38.084Z|00051|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_orig_tuple6
2020-10-27T17:46:38.084Z|00052|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports IPv6 ND Extensions
2020-10-27T17:46:38.090Z|00053|bridge|INFO|bridge br0: added interface br0 on port 65534
2020-10-27T17:46:38.090Z|00054|netdev_dpdk|WARN|Failed to enable flow control on device 0
2020-10-27T17:46:38.099Z|00055|dpif_netdev|INFO|PMD thread on numa_id: 1, core id: 5 created.
2020-10-27T17:46:38.107Z|00056|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 2 created.
2020-10-27T17:46:38.107Z|00057|dpif_netdev|INFO|There are 1 pmd threads on numa node 1
2020-10-27T17:46:38.107Z|00058|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
2020-10-27T17:46:38.107Z|00059|dpdk|INFO|Device with port_id=0 already stopped
2020-10-27T17:46:38.382Z|00060|netdev_dpdk|INFO|Port 0: ec:f4:bb:e2:f4:94
2020-10-27T17:46:38.382Z|00061|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'dpdk-p0' rx queue 0 (measured processing cycles 0).
2020-10-27T17:46:38.383Z|00062|bridge|INFO|bridge br0: added interface dpdk-p0 on port 1
2020-10-27T17:46:38.385Z|00063|dpdk|INFO|VHOST_CONFIG: Linear buffers requested without external buffers, disabling host segmentation offloading support
2020-10-27T17:46:38.390Z|00064|dpdk|INFO|VHOST_CONFIG: vhost-user client: socket created, fd: 1091
2020-10-27T17:46:38.390Z|00065|netdev_dpdk|INFO|vHost User device 'vhost-vm-2004-tpl' created in 'client' mode, using client socket '/var/run/openvswitch-vhost/vhost-vm-2004-tpl'
2020-10-27T17:46:38.394Z|00066|dpdk|WARN|VHOST_CONFIG: failed to connect to /var/run/openvswitch-vhost/vhost-vm-2004-tpl: No such file or directory
2020-10-27T17:46:38.394Z|00067|dpdk|INFO|VHOST_CONFIG: /var/run/openvswitch-vhost/vhost-vm-2004-tpl: reconnecting...
2020-10-27T17:46:38.538Z|00068|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'dpdk-p0' rx queue 0 (measured processing cycles 0).
2020-10-27T17:46:38.538Z|00069|dpif_netdev|INFO|Core 5 on numa node 1 assigned port 'vhost-vm-2004-tpl' rx queue 0 (measured processing cycles 0).
2020-10-27T17:46:38.538Z|00070|bridge|INFO|bridge br0: added interface vhost-vm-2004-tpl on port 2
2020-10-27T17:46:38.538Z|00071|bridge|INFO|bridge br0: using datapath ID 0000ecf4bbe2f494
2020-10-27T17:46:38.539Z|00072|connmgr|INFO|br0: added service controller "punix:/usr/local/var/run/openvswitch/br0.mgmt"
2020-10-27T17:46:38.539Z|00073|timeval|WARN|Unreasonably long 1554ms poll interval (361ms user, 789ms system)
2020-10-27T17:46:38.539Z|00074|timeval|WARN|faults: 36263 minor, 0 major
2020-10-27T17:46:38.539Z|00075|timeval|WARN|disk: 0 reads, 24 writes
2020-10-27T17:46:38.539Z|00076|timeval|WARN|context switches: 857 voluntary, 1425 involuntary
2020-10-27T17:46:38.539Z|00077|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=072f9aca:
2020-10-27T17:46:38.539Z|00078|coverage|INFO|bridge_reconfigure 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00079|coverage|INFO|ofproto_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00080|coverage|INFO|ofproto_update_port 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00081|coverage|INFO|rev_flow_table 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00082|coverage|INFO|cmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 44
2020-10-27T17:46:38.540Z|00083|coverage|INFO|cmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 25
2020-10-27T17:46:38.540Z|00084|coverage|INFO|datapath_drop_upcall_error 0.0/sec 0.000/sec 0.0000/sec total: 2
2020-10-27T17:46:38.540Z|00085|coverage|INFO|dpif_port_add 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00086|coverage|INFO|dpif_flow_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00087|coverage|INFO|dpif_flow_get 0.0/sec 0.000/sec 0.0000/sec total: 23
2020-10-27T17:46:38.540Z|00088|coverage|INFO|dpif_flow_put 0.0/sec 0.000/sec 0.0000/sec total: 24
2020-10-27T17:46:38.540Z|00089|coverage|INFO|dpif_flow_del 0.0/sec 0.000/sec 0.0000/sec total: 23
2020-10-27T17:46:38.540Z|00090|coverage|INFO|dpif_execute 0.0/sec 0.000/sec 0.0000/sec total: 6
2020-10-27T17:46:38.540Z|00091|coverage|INFO|flow_extract 0.0/sec 0.000/sec 0.0000/sec total: 4
2020-10-27T17:46:38.540Z|00092|coverage|INFO|miniflow_malloc 0.0/sec 0.000/sec 0.0000/sec total: 35
2020-10-27T17:46:38.540Z|00093|coverage|INFO|hmap_pathological 0.0/sec 0.000/sec 0.0000/sec total: 4
2020-10-27T17:46:38.540Z|00094|coverage|INFO|hmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 492
2020-10-27T17:46:38.540Z|00095|coverage|INFO|hmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 2
2020-10-27T17:46:38.540Z|00096|coverage|INFO|netdev_received 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00097|coverage|INFO|netdev_get_stats 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00098|coverage|INFO|poll_create_node 0.0/sec 0.000/sec 0.0000/sec total: 30
2020-10-27T17:46:38.540Z|00099|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00100|coverage|INFO|seq_change 0.0/sec 0.000/sec 0.0000/sec total: 137
2020-10-27T17:46:38.540Z|00101|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00102|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00103|coverage|INFO|util_xalloc 0.0/sec 0.000/sec 0.0000/sec total: 9631
2020-10-27T17:46:38.540Z|00104|coverage|INFO|netdev_set_policing 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00105|coverage|INFO|netdev_get_ethtool 0.0/sec 0.000/sec 0.0000/sec total: 2
2020-10-27T17:46:38.540Z|00106|coverage|INFO|netlink_received 0.0/sec 0.000/sec 0.0000/sec total: 87
2020-10-27T17:46:38.540Z|00107|coverage|INFO|netlink_recv_jumbo 0.0/sec 0.000/sec 0.0000/sec total: 19
2020-10-27T17:46:38.540Z|00108|coverage|INFO|netlink_sent 0.0/sec 0.000/sec 0.0000/sec total: 85
2020-10-27T17:46:38.540Z|00109|coverage|INFO|111 events never hit
2020-10-27T17:46:38.546Z|00110|netdev_dpdk|WARN|Failed to enable flow control on device 0
2020-10-27T17:46:38.547Z|00111|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.14.90
2020-10-27T17:46:47.093Z|00112|memory|INFO|196052 kB peak resident set size after 10.1 seconds
2020-10-27T17:46:47.093Z|00113|memory|INFO|handlers:1 ports:3 revalidators:1 rules:5 udpif keys:2
2020-10-27T17:46:58.392Z|00001|dpdk|INFO|VHOST_CONFIG: /var/run/openvswitch-vhost/vhost-vm-2004-tpl: connected
2020-10-27T17:46:58.392Z|00002|dpdk|INFO|VHOST_CONFIG: new device, handle is 0
2020-10-27T17:46:58.396Z|00001|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2020-10-27T17:46:58.396Z|00002|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
2020-10-27T17:46:58.396Z|00003|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
2020-10-27T17:46:58.396Z|00004|dpdk|INFO|VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcb7
2020-10-27T17:46:58.396Z|00005|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
2020-10-27T17:46:58.396Z|00006|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
2020-10-27T17:46:58.396Z|00007|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_OWNER
2020-10-27T17:46:58.396Z|00008|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2020-10-27T17:46:58.396Z|00009|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
2020-10-27T17:46:58.396Z|00010|dpdk|INFO|VHOST_CONFIG: vring call idx:0 file:1100
2020-10-27T17:46:58.396Z|00011|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
2020-10-27T17:46:58.396Z|00012|dpdk|INFO|VHOST_CONFIG: vring call idx:1 file:1101
2020-10-27T17:47:01.905Z|00013|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00014|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2020-10-27T17:47:01.905Z|00015|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.905Z|00016|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00017|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2020-10-27T17:47:01.905Z|00018|netdev_dpdk|INFO|State of queue 1 ( rx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.905Z|00019|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00020|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2020-10-27T17:47:01.905Z|00021|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.905Z|00022|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00023|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2020-10-27T17:47:01.905Z|00024|netdev_dpdk|INFO|State of queue 1 ( rx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.908Z|00025|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
2020-10-27T17:47:01.908Z|00026|dpdk|INFO|VHOST_CONFIG: negotiated Virtio features: 0x17020a782
2020-10-27T17:47:50.172Z|00001|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=54:b2:03:14:d0:39,dst=01:00:5e:00:00:01),eth_type(0x0800),ipv4(src=0.0.0.0,dst=224.0.0.1,proto=2,tos=0xc0,ttl=1,frag=no)
2020-10-27T17:47:50.172Z|00002|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:94f25b77-62c5-4859-aec3-e9a41c72dc3d skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=54:b2:03:14:d0:39,dst=01:00:5e:00:00:01),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=224.0.0.1/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:52.680Z|00003|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=44:2c:05:ce:8d:03,dst=01:00:5e:7f:ff:fa),eth_type(0x0800),ipv4(src=192.168.27.150,dst=239.255.255.250,proto=2,tos=0xc0,ttl=1,frag=no)
2020-10-27T17:47:52.680Z|00004|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:aff312f9-4416-49e4-a314-9f895aa96de1 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=44:2c:05:ce:8d:03,dst=01:00:5e:7f:ff:fa),eth_type(0x0800),ipv4(src=192.168.27.150/0.0.0.0,dst=239.255.255.250/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:55.009Z|00005|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fb),eth_type(0x0800),ipv4(src=192.168.27.232,dst=224.0.0.251,proto=2,tos=0,ttl=1,frag=no)
2020-10-27T17:47:55.009Z|00006|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=56:ed:b8:d2:f1:e3,dst=01:00:5e:00:00:6a),eth_type(0x0800),ipv4(src=192.168.27.101,dst=224.0.0.106,proto=2,tos=0xc0,ttl=1,frag=no)
2020-10-27T17:47:55.009Z|00007|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:b108050d-511e-447d-8837-35af4af81c4e skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fb),eth_type(0x0800),ipv4(src=192.168.27.232/0.0.0.0,dst=224.0.0.251/0.0.0.0,proto=2/0,tos=0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:55.009Z|00008|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:02feab90-66a5-484c-bbc5-8e97985d1f73 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=56:ed:b8:d2:f1:e3,dst=01:00:5e:00:00:6a),eth_type(0x0800),ipv4(src=192.168.27.101/0.0.0.0,dst=224.0.0.106/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:56.014Z|00009|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fc),eth_type(0x0800),ipv4(src=192.168.27.232,dst=224.0.0.252,proto=2,tos=0,ttl=1,frag=no)
2020-10-27T17:47:56.014Z|00010|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:22a5115c-c730-42b2-a590-87b999192781 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fc),eth_type(0x0800),ipv4(src=192.168.27.232/0.0.0.0,dst=224.0.0.252/0.0.0.0,proto=2/0,tos=0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))

Want to test TURN server with Puppeteer and https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/

What I want to do:
I want to test my STUN and TURN servers by driving Google's WebRTC sample implementations automatically with Puppeteer.
Problem:
Running under Puppeteer returns different local addresses and no IPv6.
```
const puppeteer = require('puppeteer-core');

(async () => {
  const browser = await puppeteer.launch({
    headless: false,
    defaultViewport: null,
    executablePath:
      'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe',
    args: [
      '--no-sandbox'
    ]
  });
  const pages = await browser.pages();
  const page = pages[0];
  await page.goto('https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/');
})();
```
With Puppeteer:
```
0.010 rtp host 3103129966 udp d5f4bf78-64bb-4f9c-9ae4-ee0a4c3892de.local 58262 126 | 30 | 255
0.011 rtp host 2094226564 udp e657b779-3563-4753-aa3f-a533494f02aa.local 58263 126 | 40 | 255
0.070 rtp srflx 842163049 udp 80.142.xxx.xxx 58262 100 | 30 | 255
```
Without puppeteer:
```
0.003 rtp host 3103129966 udp 192.168.2.111 59612 126 | 30 | 255
0.003 rtp host 2094226564 udp [2003:f0:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx] 59613 126 | 40 | 255
0.062 rtp srflx 842163049 udp 80.142.xxx.xxxx 59612 100 | 30 | 255
```
Am I missing a config parameter? I could not find anything so far:
https://peter.sh/experiments/chromium-command-line-switches/
Parameters I tried (trial and error):
```
"--enforce-webrtc-ip-permission-check",
"--force-webrtc-ip-handling-policy",
"--webrtc-stun-probe-trial",
"--enable-webrtc-stun-origin"
```
Thanks!
You're running into similar problems as described here.
In Puppeteer you never called getUserMedia, so host candidates are obfuscated using mDNS. In the regular browser you did at some point, and that information is persisted.
--disable-features=WebRtcHideLocalIpsWithMdns will disable mDNS, but note that the host candidates you get are irrelevant to answering the question of whether the TURN server works (which it does not in your case, as there is no relay candidate).
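For reference, the flag can be added to the existing launch arguments. A minimal sketch (the executablePath is the one from the question; the flag name is the one given in this answer):

```javascript
// Same launch arguments as in the question, plus the mDNS-disabling flag,
// so host candidates show real local IPs instead of .local names.
const launchArgs = [
  '--no-sandbox',
  '--disable-features=WebRtcHideLocalIpsWithMdns',
];

// const puppeteer = require('puppeteer-core');
// const browser = await puppeteer.launch({
//   headless: false,
//   defaultViewport: null,
//   executablePath: 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe',
//   args: launchArgs,
// });

console.log(launchArgs.join(' '));
```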

Flask API server slow response time

I created an API server with Flask and run it with gunicorn and eventlet. I noticed long response times from the Flask server when calling the APIs, so I profiled my client twice: once from my laptop, and once directly on the Flask API server.
From my laptop:
302556 function calls (295712 primitive calls) in 5.594 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
72 4.370 0.061 4.370 0.061 {method 'poll' of 'select.epoll' objects}
16 0.374 0.023 0.374 0.023 {method 'connect' of '_socket.socket' objects}
16 0.213 0.013 0.213 0.013 {method 'load_verify_locations' of '_ssl._SSLContext' objects}
16 0.053 0.003 0.058 0.004 httplib.py:798(close)
52 0.034 0.001 0.034 0.001 {method 'do_handshake' of '_ssl._SSLSocket' objects}
On server:
231449 function calls (225936 primitive calls) in 3.320 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
12 2.132 0.178 2.132 0.178 {built-in method read}
13 0.286 0.022 0.286 0.022 {method 'poll' of 'select.epoll' objects}
12 0.119 0.010 0.119 0.010 {_ssl.sslwrap}
12 0.095 0.008 0.095 0.008 {built-in method do_handshake}
855/222 0.043 0.000 0.116 0.001 sre_parse.py:379(_parse)
1758/218 0.029 0.000 0.090 0.000 sre_compile.py:32(_compile)
1013 0.027 0.000 0.041 0.000 sre_compile.py:207(_optimize_charset)
12429 0.023 0.000 0.029 0.000 sre_parse.py:182(__next)
So, based on the profile results, my client spends a long time waiting for the server's response.
I serve the Flask app using gunicorn with eventlet, with the following configuration:
import multiprocessing
bind = ['0.0.0.0:8000']
backlog = 2048
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
user = 'www-data'
group = 'www-data'
loglevel = 'info'
My client is a custom HTTP client that uses eventlet to patch httplib2 and creates a pool of connections to the server.
I'm stuck troubleshooting here. All server stats look normal. How can I find the bottleneck of my API server?
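One way to narrow this down is to profile a single view in isolation on the server, so the epoll/socket waits of the whole stack don't drown out the handler's own cost. A minimal stdlib sketch, where `handler` is a placeholder standing in for one of the Flask views (not part of the original code):

```python
import cProfile
import io
import pstats

def handler():
    # Stand-in for one Flask view function; call the real view here instead.
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
result = handler()
profiler.disable()

# Print the five most expensive entries by cumulative time, in the same
# format as the profiles above, but scoped to a single request handler.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

If the handler is cheap in isolation but slow end-to-end, the time is being spent in the network/TLS layer or in gunicorn/eventlet scheduling rather than in the application code.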

Unit testing and error 500

I'm an absolute newbie to unit testing, but I feel the need to learn something about it while making my switch to CakePHP 3.
Following the manual, I've installed phpunit through composer (the same way I did for the whole Cake package), created an empty test database, and given config/app.php the right information on how to connect.
Through the bake plugin I've baked some tests (I've got 19 tests now). Each one of them is (correctly) marked as incomplete.
Now I'm trying to write a test for one of my controllers' index function; this is what I've done:
public function testIndex()
{
    $this->session([
        'Auth' => [
            'User' => [
                'id' => 1,
                'email' => 'test@test.com',
                'level' => 'adm',
            ]
        ]
    ]);
    $this->get('/invoices');
    $this->assertResponseOk();
}
The problem is that it just doesn't work, and I don't know what is throwing that 500 error...
Status code is not between 200 and 204
Failed asserting that 500 is equal to 204 or is less than 204.
What am I doing wrong?