QEMU/KVM VM can't access the network with OVS-DPDK

I'm using OVS with DPDK to improve network performance, but I can't resolve this problem by myself.
DPDK devbind script output:
# dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:07:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=igb,vfio-pci,uio_pci_generic
0000:07:00.1 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=igb,vfio-pci,uio_pci_generic
Network devices using kernel driver
===================================
0000:01:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=eno1 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' if=eno2 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
No 'Baseband' devices detected
==============================
No 'Crypto' devices detected
============================
No 'Eventdev' devices detected
==============================
No 'Mempool' devices detected
=============================
No 'Compress' devices detected
==============================
No 'Misc (rawdev)' devices detected
===================================
ovs config
# ovs-vsctl --no-wait get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="0x6", pmd-cpu-mask="0x24"}
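For readers decoding the masks above: dpdk-lcore-mask and pmd-cpu-mask are CPU bitmasks. A small sketch (a hypothetical helper, not part of OVS) that decodes them:

```python
def mask_to_cpus(mask: int) -> list:
    """Decode a CPU bitmask (as used by dpdk-lcore-mask / pmd-cpu-mask)
    into the list of core IDs whose bits are set."""
    return [i for i in range(mask.bit_length()) if (mask >> i) & 1]

print(mask_to_cpus(0x6))   # lcore mask -> [1, 2]
print(mask_to_cpus(0x24))  # PMD mask   -> [2, 5]
```

Cores 2 and 5 are exactly the PMD threads reported later in the OVS log (one per NUMA node).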
cpu info
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
Stepping: 4
CPU MHz: 2965.447
CPU max MHz: 3100.0000
CPU min MHz: 1200.0000
BogoMIPS: 5199.97
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 3 MiB
L3 cache: 30 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
hugepage info
# grep Huge /proc/meminfo
AnonHugePages: 1214464 kB
ShmemHugePages: 0 kB
FileHugePages: 0 kB
HugePages_Total: 8192
HugePages_Free: 5846
HugePages_Rsvd: 1488
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 16777216 kB
vm startup script
/usr/bin/qemu-system-x86_64 \
-enable-kvm \
-cpu host,kvm=off \
-smp 4 \
-m 8192M \
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
-mem-prealloc \
-chardev socket,id=char-vm-2004-tpl,path=/var/run/openvswitch-vhost/vhost-vm-2004-tpl,server \
-netdev type=vhost-user,id=net-vm-2004-tpl,chardev=char-vm-2004-tpl,vhostforce \
-device virtio-net-pci,mac=52:54:14:cb:ab:6c,netdev=net-vm-2004-tpl \
-drive file=/opt/image/ubuntu-2004-tpl.img,if=virtio \
-vga qxl \
-spice port=15937,disable-ticketing \
-qmp tcp:0.0.0.0:25937,server,nowait \
-daemonize
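One thing worth double-checking in the script above (a common vhost-user pitfall, not a confirmed diagnosis): the memory backend is only size=1G while the guest gets -m 8192M, and the backend is never attached as the guest's RAM with -numa. vhost-user requires all guest memory to live in shared, hugepage-backed memory so the OVS backend can map it. A sketch of the usual pattern (sizes illustrative):

```shell
# Guest RAM and the shared hugepage backend must be the same size,
# and the backend must actually back the guest's memory:
qemu-system-x86_64 \
  -m 8192M \
  -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem \
  ...
```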
ovs status
# ovs-vsctl show
2a4487e3-124a-4b66-92e1-1e824fd9a138
Bridge br0
datapath_type: netdev
Port vhost-vm-2004-tpl
Interface vhost-vm-2004-tpl
type: dpdkvhostuserclient
options: {vhost-server-path="/var/run/openvswitch-vhost/vhost-vm-2004-tpl"}
Port dpdk-p0
Interface dpdk-p0
type: dpdk
options: {dpdk-devargs="0000:07:00.0"}
Port br0
Interface br0
type: internal
ovs_version: "2.14.90"
ovs OpenFlow status
# ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ecf4bbe2f494
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk-p0): addr:ec:f4:bb:e2:f4:94
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
2(vhost-vm-2004-t): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:ec:f4:bb:e2:f4:94
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
The VM can't get an IP address via DHCP, and when I run "ip link set ens3 up" in the VM, it tells me "RTNETLINK answers: Operation not permitted".
The host kernel version is:
5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
The VM kernel version is:
5.4.0-47-generic #51-Ubuntu SMP Fri Sep 4 19:50:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
DPDK version:
DPDK 19.11.5
Open vSwitch version:
2.14.90 (commit id:93023e80bd13ec1f09831eba484cf4621582d1a5 of https://github.com/openvswitch/ovs branch master)
ovs full log
2020-10-27T17:46:36.950Z|00001|vlog|INFO|opened log file /usr/local/var/log/openvswitch/ovs-vswitchd.log
2020-10-27T17:46:36.982Z|00002|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 1
2020-10-27T17:46:36.982Z|00003|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0
2020-10-27T17:46:36.982Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 24 CPU cores
2020-10-27T17:46:36.983Z|00005|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
2020-10-27T17:46:36.983Z|00006|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
2020-10-27T17:46:36.985Z|00007|dpdk|INFO|Using DPDK 19.11.5
2020-10-27T17:46:36.985Z|00008|dpdk|INFO|DPDK Enabled - initializing...
2020-10-27T17:46:36.985Z|00009|dpdk|INFO|No vhost-sock-dir provided - defaulting to /usr/local/var/run/openvswitch
2020-10-27T17:46:36.985Z|00010|dpdk|INFO|IOMMU support for vhost-user-client disabled.
2020-10-27T17:46:36.985Z|00011|dpdk|INFO|POSTCOPY support for vhost-user-client disabled.
2020-10-27T17:46:36.985Z|00012|dpdk|INFO|Per port memory for DPDK devices disabled.
2020-10-27T17:46:36.985Z|00013|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0x6 --socket-mem 1024,1024 --socket-limit 1024,1024.
2020-10-27T17:46:36.988Z|00014|dpdk|INFO|EAL: Detected 24 lcore(s)
2020-10-27T17:46:36.988Z|00015|dpdk|INFO|EAL: Detected 2 NUMA nodes
2020-10-27T17:46:37.026Z|00016|dpdk|INFO|EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
2020-10-27T17:46:37.042Z|00017|dpdk|INFO|EAL: Selected IOVA mode 'PA'
2020-10-27T17:46:37.051Z|00018|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB
2020-10-27T17:46:37.051Z|00019|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB
2020-10-27T17:46:37.051Z|00020|dpdk|WARN|EAL: No available hugepages reported in hugepages-1048576kB
2020-10-27T17:46:37.051Z|00021|dpdk|INFO|EAL: Probing VFIO support...
2020-10-27T17:46:37.051Z|00022|dpdk|INFO|EAL: VFIO support initialized
2020-10-27T17:46:37.930Z|00023|dpdk|INFO|EAL: PCI device 0000:01:00.0 on NUMA socket 0
2020-10-27T17:46:37.930Z|00024|dpdk|INFO|EAL: probe driver: 8086:1528 net_ixgbe
2020-10-27T17:46:37.930Z|00025|dpdk|INFO|EAL: PCI device 0000:01:00.1 on NUMA socket 0
2020-10-27T17:46:37.930Z|00026|dpdk|INFO|EAL: probe driver: 8086:1528 net_ixgbe
2020-10-27T17:46:37.930Z|00027|dpdk|INFO|EAL: PCI device 0000:07:00.0 on NUMA socket 0
2020-10-27T17:46:37.930Z|00028|dpdk|INFO|EAL: probe driver: 8086:1521 net_e1000_igb
2020-10-27T17:46:37.995Z|00029|dpdk|INFO|EAL: PCI device 0000:07:00.1 on NUMA socket 0
2020-10-27T17:46:37.996Z|00030|dpdk|INFO|EAL: probe driver: 8086:1521 net_e1000_igb
2020-10-27T17:46:38.067Z|00031|dpdk|INFO|DPDK Enabled - initialized
2020-10-27T17:46:38.071Z|00032|pmd_perf|INFO|DPDK provided TSC frequency: 2600000 KHz
2020-10-27T17:46:38.083Z|00033|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports recirculation
2020-10-27T17:46:38.083Z|00034|ofproto_dpif|INFO|netdev#ovs-netdev: VLAN header stack length probed as 1
2020-10-27T17:46:38.083Z|00035|ofproto_dpif|INFO|netdev#ovs-netdev: MPLS label stack length probed as 3
2020-10-27T17:46:38.083Z|00036|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports truncate action
2020-10-27T17:46:38.083Z|00037|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports unique flow ids
2020-10-27T17:46:38.083Z|00038|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports clone action
2020-10-27T17:46:38.083Z|00039|ofproto_dpif|INFO|netdev#ovs-netdev: Max sample nesting level probed as 10
2020-10-27T17:46:38.083Z|00040|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports eventmask in conntrack action
2020-10-27T17:46:38.083Z|00041|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_clear action
2020-10-27T17:46:38.083Z|00042|ofproto_dpif|INFO|netdev#ovs-netdev: Max dp_hash algorithm probed to be 1
2020-10-27T17:46:38.083Z|00043|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports check_pkt_len action
2020-10-27T17:46:38.083Z|00044|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports timeout policy in conntrack action
2020-10-27T17:46:38.083Z|00045|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_state
2020-10-27T17:46:38.083Z|00046|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_zone
2020-10-27T17:46:38.083Z|00047|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_mark
2020-10-27T17:46:38.083Z|00048|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_label
2020-10-27T17:46:38.083Z|00049|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_state_nat
2020-10-27T17:46:38.084Z|00050|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_orig_tuple
2020-10-27T17:46:38.084Z|00051|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports ct_orig_tuple6
2020-10-27T17:46:38.084Z|00052|ofproto_dpif|INFO|netdev#ovs-netdev: Datapath supports IPv6 ND Extensions
2020-10-27T17:46:38.090Z|00053|bridge|INFO|bridge br0: added interface br0 on port 65534
2020-10-27T17:46:38.090Z|00054|netdev_dpdk|WARN|Failed to enable flow control on device 0
2020-10-27T17:46:38.099Z|00055|dpif_netdev|INFO|PMD thread on numa_id: 1, core id: 5 created.
2020-10-27T17:46:38.107Z|00056|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 2 created.
2020-10-27T17:46:38.107Z|00057|dpif_netdev|INFO|There are 1 pmd threads on numa node 1
2020-10-27T17:46:38.107Z|00058|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
2020-10-27T17:46:38.107Z|00059|dpdk|INFO|Device with port_id=0 already stopped
2020-10-27T17:46:38.382Z|00060|netdev_dpdk|INFO|Port 0: ec:f4:bb:e2:f4:94
2020-10-27T17:46:38.382Z|00061|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'dpdk-p0' rx queue 0 (measured processing cycles 0).
2020-10-27T17:46:38.383Z|00062|bridge|INFO|bridge br0: added interface dpdk-p0 on port 1
2020-10-27T17:46:38.385Z|00063|dpdk|INFO|VHOST_CONFIG: Linear buffers requested without external buffers, disabling host segmentation offloading support
2020-10-27T17:46:38.390Z|00064|dpdk|INFO|VHOST_CONFIG: vhost-user client: socket created, fd: 1091
2020-10-27T17:46:38.390Z|00065|netdev_dpdk|INFO|vHost User device 'vhost-vm-2004-tpl' created in 'client' mode, using client socket '/var/run/openvswitch-vhost/vhost-vm-2004-tpl'
2020-10-27T17:46:38.394Z|00066|dpdk|WARN|VHOST_CONFIG: failed to connect to /var/run/openvswitch-vhost/vhost-vm-2004-tpl: No such file or directory
2020-10-27T17:46:38.394Z|00067|dpdk|INFO|VHOST_CONFIG: /var/run/openvswitch-vhost/vhost-vm-2004-tpl: reconnecting...
2020-10-27T17:46:38.538Z|00068|dpif_netdev|INFO|Core 2 on numa node 0 assigned port 'dpdk-p0' rx queue 0 (measured processing cycles 0).
2020-10-27T17:46:38.538Z|00069|dpif_netdev|INFO|Core 5 on numa node 1 assigned port 'vhost-vm-2004-tpl' rx queue 0 (measured processing cycles 0).
2020-10-27T17:46:38.538Z|00070|bridge|INFO|bridge br0: added interface vhost-vm-2004-tpl on port 2
2020-10-27T17:46:38.538Z|00071|bridge|INFO|bridge br0: using datapath ID 0000ecf4bbe2f494
2020-10-27T17:46:38.539Z|00072|connmgr|INFO|br0: added service controller "punix:/usr/local/var/run/openvswitch/br0.mgmt"
2020-10-27T17:46:38.539Z|00073|timeval|WARN|Unreasonably long 1554ms poll interval (361ms user, 789ms system)
2020-10-27T17:46:38.539Z|00074|timeval|WARN|faults: 36263 minor, 0 major
2020-10-27T17:46:38.539Z|00075|timeval|WARN|disk: 0 reads, 24 writes
2020-10-27T17:46:38.539Z|00076|timeval|WARN|context switches: 857 voluntary, 1425 involuntary
2020-10-27T17:46:38.539Z|00077|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=072f9aca:
2020-10-27T17:46:38.539Z|00078|coverage|INFO|bridge_reconfigure 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00079|coverage|INFO|ofproto_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00080|coverage|INFO|ofproto_update_port 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00081|coverage|INFO|rev_flow_table 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00082|coverage|INFO|cmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 44
2020-10-27T17:46:38.540Z|00083|coverage|INFO|cmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 25
2020-10-27T17:46:38.540Z|00084|coverage|INFO|datapath_drop_upcall_error 0.0/sec 0.000/sec 0.0000/sec total: 2
2020-10-27T17:46:38.540Z|00085|coverage|INFO|dpif_port_add 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00086|coverage|INFO|dpif_flow_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00087|coverage|INFO|dpif_flow_get 0.0/sec 0.000/sec 0.0000/sec total: 23
2020-10-27T17:46:38.540Z|00088|coverage|INFO|dpif_flow_put 0.0/sec 0.000/sec 0.0000/sec total: 24
2020-10-27T17:46:38.540Z|00089|coverage|INFO|dpif_flow_del 0.0/sec 0.000/sec 0.0000/sec total: 23
2020-10-27T17:46:38.540Z|00090|coverage|INFO|dpif_execute 0.0/sec 0.000/sec 0.0000/sec total: 6
2020-10-27T17:46:38.540Z|00091|coverage|INFO|flow_extract 0.0/sec 0.000/sec 0.0000/sec total: 4
2020-10-27T17:46:38.540Z|00092|coverage|INFO|miniflow_malloc 0.0/sec 0.000/sec 0.0000/sec total: 35
2020-10-27T17:46:38.540Z|00093|coverage|INFO|hmap_pathological 0.0/sec 0.000/sec 0.0000/sec total: 4
2020-10-27T17:46:38.540Z|00094|coverage|INFO|hmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 492
2020-10-27T17:46:38.540Z|00095|coverage|INFO|hmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 2
2020-10-27T17:46:38.540Z|00096|coverage|INFO|netdev_received 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00097|coverage|INFO|netdev_get_stats 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00098|coverage|INFO|poll_create_node 0.0/sec 0.000/sec 0.0000/sec total: 30
2020-10-27T17:46:38.540Z|00099|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00100|coverage|INFO|seq_change 0.0/sec 0.000/sec 0.0000/sec total: 137
2020-10-27T17:46:38.540Z|00101|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0000/sec total: 3
2020-10-27T17:46:38.540Z|00102|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00103|coverage|INFO|util_xalloc 0.0/sec 0.000/sec 0.0000/sec total: 9631
2020-10-27T17:46:38.540Z|00104|coverage|INFO|netdev_set_policing 0.0/sec 0.000/sec 0.0000/sec total: 1
2020-10-27T17:46:38.540Z|00105|coverage|INFO|netdev_get_ethtool 0.0/sec 0.000/sec 0.0000/sec total: 2
2020-10-27T17:46:38.540Z|00106|coverage|INFO|netlink_received 0.0/sec 0.000/sec 0.0000/sec total: 87
2020-10-27T17:46:38.540Z|00107|coverage|INFO|netlink_recv_jumbo 0.0/sec 0.000/sec 0.0000/sec total: 19
2020-10-27T17:46:38.540Z|00108|coverage|INFO|netlink_sent 0.0/sec 0.000/sec 0.0000/sec total: 85
2020-10-27T17:46:38.540Z|00109|coverage|INFO|111 events never hit
2020-10-27T17:46:38.546Z|00110|netdev_dpdk|WARN|Failed to enable flow control on device 0
2020-10-27T17:46:38.547Z|00111|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.14.90
2020-10-27T17:46:47.093Z|00112|memory|INFO|196052 kB peak resident set size after 10.1 seconds
2020-10-27T17:46:47.093Z|00113|memory|INFO|handlers:1 ports:3 revalidators:1 rules:5 udpif keys:2
2020-10-27T17:46:58.392Z|00001|dpdk|INFO|VHOST_CONFIG: /var/run/openvswitch-vhost/vhost-vm-2004-tpl: connected
2020-10-27T17:46:58.392Z|00002|dpdk|INFO|VHOST_CONFIG: new device, handle is 0
2020-10-27T17:46:58.396Z|00001|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2020-10-27T17:46:58.396Z|00002|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
2020-10-27T17:46:58.396Z|00003|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
2020-10-27T17:46:58.396Z|00004|dpdk|INFO|VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcb7
2020-10-27T17:46:58.396Z|00005|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
2020-10-27T17:46:58.396Z|00006|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
2020-10-27T17:46:58.396Z|00007|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_OWNER
2020-10-27T17:46:58.396Z|00008|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
2020-10-27T17:46:58.396Z|00009|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
2020-10-27T17:46:58.396Z|00010|dpdk|INFO|VHOST_CONFIG: vring call idx:0 file:1100
2020-10-27T17:46:58.396Z|00011|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
2020-10-27T17:46:58.396Z|00012|dpdk|INFO|VHOST_CONFIG: vring call idx:1 file:1101
2020-10-27T17:47:01.905Z|00013|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00014|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2020-10-27T17:47:01.905Z|00015|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.905Z|00016|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00017|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2020-10-27T17:47:01.905Z|00018|netdev_dpdk|INFO|State of queue 1 ( rx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.905Z|00019|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00020|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2020-10-27T17:47:01.905Z|00021|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.905Z|00022|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2020-10-27T17:47:01.905Z|00023|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2020-10-27T17:47:01.905Z|00024|netdev_dpdk|INFO|State of queue 1 ( rx_qid 0 ) of vhost device '/var/run/openvswitch-vhost/vhost-vm-2004-tpl' changed to 'enabled'
2020-10-27T17:47:01.908Z|00025|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
2020-10-27T17:47:01.908Z|00026|dpdk|INFO|VHOST_CONFIG: negotiated Virtio features: 0x17020a782
2020-10-27T17:47:50.172Z|00001|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=54:b2:03:14:d0:39,dst=01:00:5e:00:00:01),eth_type(0x0800),ipv4(src=0.0.0.0,dst=224.0.0.1,proto=2,tos=0xc0,ttl=1,frag=no)
2020-10-27T17:47:50.172Z|00002|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:94f25b77-62c5-4859-aec3-e9a41c72dc3d skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=54:b2:03:14:d0:39,dst=01:00:5e:00:00:01),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=224.0.0.1/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:52.680Z|00003|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=44:2c:05:ce:8d:03,dst=01:00:5e:7f:ff:fa),eth_type(0x0800),ipv4(src=192.168.27.150,dst=239.255.255.250,proto=2,tos=0xc0,ttl=1,frag=no)
2020-10-27T17:47:52.680Z|00004|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:aff312f9-4416-49e4-a314-9f895aa96de1 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=44:2c:05:ce:8d:03,dst=01:00:5e:7f:ff:fa),eth_type(0x0800),ipv4(src=192.168.27.150/0.0.0.0,dst=239.255.255.250/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:55.009Z|00005|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fb),eth_type(0x0800),ipv4(src=192.168.27.232,dst=224.0.0.251,proto=2,tos=0,ttl=1,frag=no)
2020-10-27T17:47:55.009Z|00006|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=56:ed:b8:d2:f1:e3,dst=01:00:5e:00:00:6a),eth_type(0x0800),ipv4(src=192.168.27.101,dst=224.0.0.106,proto=2,tos=0xc0,ttl=1,frag=no)
2020-10-27T17:47:55.009Z|00007|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:b108050d-511e-447d-8837-35af4af81c4e skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fb),eth_type(0x0800),ipv4(src=192.168.27.232/0.0.0.0,dst=224.0.0.251/0.0.0.0,proto=2/0,tos=0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:55.009Z|00008|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:02feab90-66a5-484c-bbc5-8e97985d1f73 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=56:ed:b8:d2:f1:e3,dst=01:00:5e:00:00:6a),eth_type(0x0800),ipv4(src=192.168.27.101/0.0.0.0,dst=224.0.0.106/0.0.0.0,proto=2/0,tos=0xc0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))
2020-10-27T17:47:56.014Z|00009|dpif_netdev(revalidator6)|ERR|internal error parsing flow key skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fc),eth_type(0x0800),ipv4(src=192.168.27.232,dst=224.0.0.252,proto=2,tos=0,ttl=1,frag=no)
2020-10-27T17:47:56.014Z|00010|dpif(revalidator6)|WARN|netdev#ovs-netdev: failed to put[modify] (Invalid argument) ufid:22a5115c-c730-42b2-a590-87b999192781 skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(2),packet_type(ns=0,id=0),eth(src=00:02:c9:50:8a:f0,dst=01:00:5e:00:00:fc),eth_type(0x0800),ipv4(src=192.168.27.232/0.0.0.0,dst=224.0.0.252/0.0.0.0,proto=2/0,tos=0/0,ttl=1/0,frag=no), actions:userspace(pid=0,slow_path(match))

Related

How can I configure virtio-net-pci to emulate a big-endian Linux guest with qemu-system-aarch64 running on a little-endian host?

I emulate a big-endian Linux guest with qemu-system-aarch64 and '-device virtio-net-pci', running on a little-endian host, and get the following error when I run the DPDK l3fwd example.
#./examples/dpdk-l3fwd --log-level=pmd,8 -l 0 -- -p 0xf -L --config="(0,0,0)" --parse-ptype
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:02.0 (socket 0)
[ 150.096996] igb_uio 0000:00:02.0: uio device registered with irq 44
virtio_read_caps(): [98] skipping non VNDR cap id: 11
virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 300000, len: 1048576
get_cfg_addr(): invalid cap: overflows bar space: 4194304 > 16384
virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 200000, len: 1048576
get_cfg_addr(): invalid cap: overflows bar space: 3145728 > 16384
virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 100000, len: 1048576
get_cfg_addr(): invalid cap: overflows bar space: 2097152 > 16384
virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 1048576
get_cfg_addr(): invalid cap: overflows bar space: 1048576 > 16384
virtio_read_caps(): no modern virtio pci device found.
vtpci_init(): trying with legacy virtio pci.
EAL: Cannot mmap IO port resource: No such device
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:00:02.0 cannot be used
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:03.0 (socket 0)
virtio_read_caps(): failed to map pci device!
vtpci_init(): trying with legacy virtio pci.
vtpci_init(): skip kernel managed virtio device.
eth_virtio_pci_init(): Failed to init PCI device
EAL: Requested device 0000:00:03.0 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
soft parse-ptype is enabled
L3FWD: Missing 1 or more rule files, using default instead
port 0 is not present on the board
EAL: Error - exiting with code: 1
Cause: check_port_config failed
I found that it reads the config with the following code in the function virtio_read_caps:
ret = rte_pci_read_config(pci_dev, &cap, sizeof(cap), pos);
if (ret != sizeof(cap)) {
        PMD_INIT_LOG(DEBUG,
                     "failed to read pci cap at pos: %x ret %d",
                     pos, ret);
        break;
}
with the definition of virtio_pci_cap as follows:
struct virtio_pci_cap {
        uint8_t cap_vndr;    /* Generic PCI field: PCI_CAP_ID_VNDR */
        uint8_t cap_next;    /* Generic PCI field: next ptr. */
        uint8_t cap_len;     /* Generic PCI field: capability length */
        uint8_t cfg_type;    /* Identifies the structure. */
        uint8_t bar;         /* Where to find it. */
        uint8_t padding[3];  /* Pad to full dword. */
        uint32_t offset;     /* Offset within bar. */
        uint32_t length;     /* Length of the structure, in bytes. */
};
So on my big-endian guest the offset and length fields end up interpreted as big-endian. But in virtio-v1.1-cs01 section 2.4 I find:
Note: The device configuration space uses the little-endian format for multi-byte fields.
I guess this mismatch causes the problem, but I can't find any further information when I google it, which confuses me. Is it true that the DPDK net/virtio driver doesn't support big-endian?
There is an option listed in the QEMU vhost-user protocol for changing the endianness. Please refer to:
VHOST_USER_SET_VRING_ENDIAN
id: 23
equivalent ioctl:
VHOST_SET_VRING_ENDIAN
master payload: vring state description
Set the endianness of a VQ for legacy devices. Little-endian is indicated with state.num set to 0 and big-endian is indicated with state.num set to 1. Other values are invalid.
This request should be sent only when VHOST_USER_PROTOCOL_F_CROSS_ENDIAN has been negotiated. Backends that negotiated this feature should handle both endiannesses and expect this message once (per VQ) during device configuration (ie. before the master starts the VQ).
Note: I am not aware of a setting that can be passed to KVM/QEMU (libvirt XML) to achieve the same.

No symbolication for crash files with Xcode 7.3.1

I am not getting a symbolicated crash file with Xcode 7.3.1. Crashes from the current version of my app never have symbols; however, an older version of the app seems OK and its crash files are symbolicated.
I have tried to manually re-symbolicate by dragging it onto a device as described in this SO answer.
I tried to manually use the symbolicatecrash utility as described by this SO Answer.
I have confirmed that the dSYM file exists in the archive and am using it in both of the above manual attempts to rebuild the symbols. Any idea what I have missed?
Some of the crash files we received from our customers are corrupt. Following Apple's instructions in "Getting Crash Logs Directly From a Device Without Xcode", the customer copied the crash log and pasted it into an email.
Something along the way corrupted the crash file though, injecting \n characters in somewhat random spots. We manually fixed the corrupted crash file by comparing it to an example from our system and symbolication worked.
Note the incorrect newline characters in the corrupted examples below:
Corrupt:
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0:
0 libsystem_kernel.dylib
0x0000000185535188 0x185534000 + 4488
1 libsystem_kernel.dylib
0x0000000185534ff8 0x185534000 + 4088
2 CoreFoundation
0x00000001865325d0 0x186455000 + 906704
Should be:
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0:
0 libsystem_kernel.dylib 0x0000000185535188 0x185534000 + 4488
1 libsystem_kernel.dylib 0x0000000185534ff8 0x185534000 + 4088
2 CoreFoundation 0x00000001865325d0 0x186455000 + 906704
Corrupt:
Thread 12 crashed with ARM Thread State (64-bit):
x0: 0x0000000109020010 x1: 0x0000000109020020 x2: 0x0000000104f5c000 x3:
0xffffffffffff63ff
x4: 0x0000000000000001 x5: 0x0000000000000001 x6: 0x0000000108f84010 x7:
0x0000000000000000
x8: 0x0000200000000000 x9: 0x0000000000000000 x10: 0x0000000000000002 x11:
0x0000000174c4bb28
Should be:
Thread 12 crashed with ARM Thread State (64-bit):
x0: 0x0000000109020010 x1: 0x0000000109020020 x2: 0x0000000104f5c000 x3: 0xffffffffffff63ff
x4: 0x0000000000000001 x5: 0x0000000000000001 x6: 0x0000000108f84010 x7: 0x0000000000000000
x8: 0x0000200000000000 x9: 0x0000000000000000 x10: 0x0000000000000002 x11: 0x0000000174c4bb28
Corrupt:
Binary Images:
0x100910000 - 0x10093ffff dyld arm64 <f54ed85a94253887886a8028e20ed8ba> /usr/lib/dyld
0x188638000 - 0x188639fff libSystem.B.dylib arm64 <1b4d75209f4a37969a9575de48d48668>
/usr/lib/libSystem.B.dylib
0x18863a000 - 0x18868ffff libc++.1.dylib arm64 <b2db8b1d09283b7bafe1b2933adc5dfd>
/usr/lib/libc++.1.dylib
Should be:
Binary Images:
0x100910000 - 0x10093ffff dyld arm64 <f54ed85a94253887886a8028e20ed8ba> /usr/lib/dyld
0x188638000 - 0x188639fff libSystem.B.dylib arm64 <1b4d75209f4a37969a9575de48d48668> /usr/lib/libSystem.B.dylib
0x18863a000 - 0x18868ffff libc++.1.dylib arm64 <b2db8b1d09283b7bafe1b2933adc5dfd> /usr/lib/libc++.1.dylib
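The manual fix described above can be automated with a small heuristic script (a sketch; the continuation rules are tuned only to the corruption patterns shown and may need adjusting for other logs):

```python
def repair_crash_log(text: str) -> str:
    """Re-join lines that were split by stray newlines in a pasted crash log.

    Heuristic: a stripped line is a continuation of the previous line if it
    starts with a bare hex value (but is not a Binary Images range, which
    contains " - ") or with an absolute path.
    """
    fixed = []
    for line in text.splitlines():
        s = line.strip()
        continuation = (s.startswith("0x") and " - " not in s) or s.startswith("/")
        if fixed and continuation:
            fixed[-1] = fixed[-1].rstrip() + " " + s
        else:
            fixed.append(line)
    return "\n".join(fixed)

corrupt = "1 libsystem_kernel.dylib\n0x0000000185534ff8 0x185534000 + 4088"
print(repair_crash_log(corrupt))
# -> 1 libsystem_kernel.dylib 0x0000000185534ff8 0x185534000 + 4088
```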

Intel Turbo boost for KVM Guest

Please excuse me if this is a stupid question, but I'm curious why I'm not seeing any clock-speed differences when using Fedora as a guest VM via KVM/QEMU.
Perhaps this is a rather dumb way of measuring it, but when I do cat /proc/cpuinfo | grep MHz, the value is always the same: the base clock speed advertised by my Xeons.
Is there some option I have to pass to virsh to enable turbo boost?
This might be helpful:
[jflowers@console ~]$ sudo lshw -class processor
*-cpu
description: CPU
product: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
vendor: Intel Corp.
physical id: 400
bus info: cpu#0
version: pc-q35-2.3
slot: CPU 0
size: 2GHz
capacity: 2GHz
width: 64 bits
capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp x86-64 constant_tsc arch_perfmon rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt
configuration: cores=30 enabledcores=30 threads=1
*-processor UNCLAIMED
description: SCSI Processor
product: Console
vendor: Marvell
physical id: 0.0.0
bus info: scsi#7:0.0.0
version: 1.01
capabilities: removable
configuration: ansiversion=5
And a different utility:
[jflowers@console ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 30
On-line CPU(s) list: 0-29
Thread(s) per core: 1
Core(s) per socket: 30
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2195.304
BogoMIPS: 4390.60
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-29
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt
The speed shown in /proc/cpuinfo inside a guest is meaningless. Regardless of what it reports, the virtual CPUs run at whatever speed the host CPUs are actually running at. In other words, if your guest is doing something CPU-intensive, it will max out the host CPU, and the host kernel will ramp the host CPU up to its maximum (turbo) speed when needed.
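If you nevertheless want the guest to see the host's real CPU model and feature flags (rather than a generic virtual model such as the pc-q35 one shown above), libvirt's host-passthrough CPU mode is the usual approach. A minimal sketch of the relevant domain-XML fragment; this only changes what the guest sees, not the dynamic frequency reported in /proc/cpuinfo:

```xml
<!-- Inside the <domain> definition (edit with: virsh edit <domain-name>) -->
<cpu mode='host-passthrough' check='none'/>
```

Note that even with host-passthrough the guest has no view of the host's frequency scaling, so the MHz value it reports will still be static.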

Unable to use JMockit with OpenJDK 1.7

While trying to use JMockit (1.21) with JUnit (4.8) test cases, I ran into an issue with OpenJDK (1.7). I'm using Eclipse. After searching on SO, I found the suggestion to add the '-javaagent:path/to/JMockit/jar' JVM argument and to put the JMockit dependency before JUnit in Maven. But after adding that argument, my tests won't run; instead I get the following error. Has anyone had this issue, and how did you solve it? It works if I use OracleJDK, but I'm looking for a solution that works with OpenJDK.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000051cfbbe8, pid=9268, tid=2272
#
# JRE version: OpenJDK Runtime Environment (7.0) (build 1.7.0-45-asmurthy_2014_01_10_19_46-b00)
# Java VM: Dynamic Code Evolution 64-Bit Server VM (24.45-b06 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# V [jvm.dll+0x6bbe8]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x000000000270c000): VMThread [stack: 0x00000000074e0000,0x00000000075e0000] [id=2272]
siginfo: ExceptionCode=0xc0000005, reading address 0x0000000000000000
Registers:
RAX=0x0000000000000000, RBX=0x0000000000000000, RCX=0x0000000000000000, RDX=0x00000007fae04720
RSP=0x00000000075df190, RBP=0x00000000523a16d8, RSI=0x0000000002abe2b0, RDI=0x00000000523a16e0
R8 =0x0000000000000000, R9 =0x0000000000000100, R10=0x0000000000041999, R11=0x0000000008eab1f0
R12=0x000000000270c000, R13=0x00000007fb208040, R14=0x0000000000000001, R15=0x00000000000003d8
RIP=0x0000000051cfbbe8, EFLAGS=0x0000000000010206
Top of Stack: (sp=0x00000000075df190)
0x00000000075df190: 00000007fb208050 0000000002abe200
0x00000000075df1a0: 00000007fb208040 000000000270c000
0x00000000075df1b0: 00000000523a16e0 0000000051e0ecbc
0x00000000075df1c0: 0000000000000000 00000000523a16d8
0x00000000075df1d0: 0000000002abe2b0 000000000270c000
0x00000000075df1e0: 00000000521ea7b8 0000000052450100
0x00000000075df1f0: 0000000000000000 00000000521ea7a0
0x00000000075df200: 0000000000001000 00000000075df1e0
0x00000000075df210: 0000000000000100 0000000000000000
0x00000000075df220: 00000000073158d8 00000000000003d8
0x00000000075df230: 00000000073158d8 0000000002701ac0
0x00000000075df240: 00000000073154f0 0000000051e74dc7
0x00000000075df250: 00000000026c3de0 0000000000000001
0x00000000075df260: 0000000002abe2b0 0000000007315500
0x00000000075df270: 0000000007315500 00000000073154f0
0x00000000075df280: 0000000002701ac0 0000000051e74072
Instructions: (pc=0x0000000051cfbbe8)
0x0000000051cfbbc8: cc cc cc cc cc cc cc cc 48 89 5c 24 08 57 48 83
0x0000000051cfbbd8: ec 20 48 8b 05 17 20 69 00 48 8b 0d c0 e5 68 00
0x0000000051cfbbe8: 48 63 18 e8 60 c3 fa ff 33 ff 48 85 db 7e 37 66
0x0000000051cfbbf8: 0f 1f 84 00 00 00 00 00 48 8b 05 f1 1f 69 00 48
Register to memory mapping:
RAX=0x0000000000000000 is an unknown value
RBX=0x0000000000000000 is an unknown value
RCX=0x0000000000000000 is an unknown value
RDX=0x00000007fae04720 is an oop
{instance class}
- klass: {other class}
RSP=0x00000000075df190 is an unknown value
RBP=0x00000000523a16d8 is an unknown value
RSI=0x0000000002abe2b0 is pointing into the stack for thread: 0x00000000026c7000
RDI=0x00000000523a16e0 is an unknown value
R8 =0x0000000000000000 is an unknown value
R9 =0x0000000000000100 is an unknown value
R10=0x0000000000041999 is an unknown value
R11=0x0000000008eab1f0 is an unknown value
R12=0x000000000270c000 is an unknown value
R13=0x00000007fb208040 is an oop
{instance class}
- klass: {other class}
R14=0x0000000000000001 is an unknown value
R15=0x00000000000003d8 is an unknown value
Stack: [0x00000000074e0000,0x00000000075e0000], sp=0x00000000075df190, free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [jvm.dll+0x6bbe8]
VM_Operation (0x0000000002abe2b0): RedefineClasses, mode: safepoint, requested by thread 0x00000000026c7000
--------------- P R O C E S S ---------------
Java Threads: ( => current thread )
0x000000000739c800 JavaThread "Attach Listener" daemon [_thread_blocked, id=6668, stack(0x0000000007c60000,0x0000000007d60000)]
0x000000000739b000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=5620, stack(0x0000000007aa0000,0x0000000007ba0000)]
0x0000000007379800 JavaThread "Finalizer" daemon [_thread_blocked, id=9832, stack(0x00000000078c0000,0x00000000079c0000)]
0x0000000007370000 JavaThread "Reference Handler" daemon [_thread_blocked, id=7516, stack(0x0000000007680000,0x0000000007780000)]
0x00000000026c7000 JavaThread "main" [_thread_blocked, id=6580, stack(0x00000000029c0000,0x0000000002ac0000)]
Other Threads:
=>0x000000000270c000 VMThread [stack: 0x00000000074e0000,0x00000000075e0000] [id=2272]
VM state:at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event])
[0x00000000026c3e60] Threads_lock - owner thread: 0x000000000270c000
[0x00000000026c4360] Heap_lock - owner thread: 0x00000000026c7000
[0x00000000026c4b60] RedefineClasses_lock - owner thread: 0x00000000026c7000
Heap
def new generation total 118016K, used 21035K [0x000000067ae00000, 0x0000000682e00000, 0x00000006fae00000)
eden space 104960K, 20% used [0x000000067ae00000, 0x000000067c28af18, 0x0000000681480000)
from space 13056K, 0% used [0x0000000681480000, 0x0000000681480000, 0x0000000682140000)
to space 13056K, 0% used [0x0000000682140000, 0x0000000682140000, 0x0000000682e00000)
tenured generation total 262144K, used 0K [0x00000006fae00000, 0x000000070ae00000, 0x00000007fae00000)
the space 262144K, 0% used [0x00000006fae00000, 0x00000006fae00000, 0x00000006fae00200, 0x000000070ae00000)
compacting perm gen total 21248K, used 4129K [0x00000007fae00000, 0x00000007fc2c0000, 0x0000000800000000)
the space 21248K, 19% used [0x00000007fae00000, 0x00000007fb208478, 0x00000007fb208600, 0x00000007fc2c0000)
No shared spaces configured.
Card table byte_map: [0x0000000005e60000,0x0000000006a90000] byte_map_base: 0x0000000002a89000
Polling page: 0x0000000000150000
Code Cache [0x0000000002da0000, 0x0000000003010000, 0x0000000005da0000)
total_blobs=187 nmethods=0 adapters=156 free_code_cache=48761Kb largest_free_block=49932032
Compilation events (0 events):
No events
GC Heap History (0 events):
No events
Deoptimization events (0 events):
No events
Internal exceptions (10 events):
Event: 1.210 Thread 0x00000000026c7000 Threw 0x000000067c036fa0 at C:\openjdk\jdk7u\hotspot\src\share\vm\interpreter\interpreterRuntime.cpp:347
Event: 1.211 Thread 0x00000000026c7000 Threw 0x000000067c03ae78 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.212 Thread 0x00000000026c7000 Threw 0x000000067c045f10 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.213 Thread 0x00000000026c7000 Threw 0x000000067c057398 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.214 Thread 0x00000000026c7000 Threw 0x000000067c066510 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.215 Thread 0x00000000026c7000 Threw 0x000000067c072ac0 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.216 Thread 0x00000000026c7000 Threw 0x000000067c084678 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.217 Thread 0x00000000026c7000 Threw 0x000000067c08c7b0 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.218 Thread 0x00000000026c7000 Threw 0x000000067c0934a8 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Event: 1.218 Thread 0x00000000026c7000 Threw 0x000000067c09cac8 at C:\openjdk\jdk7u\hotspot\src\share\vm\prims\jvm.cpp:1244
Events (10 events):
Event: 1.215 loading class 0x0000000008956520 done
Event: 1.216 loading class 0x0000000008959de0
Event: 1.216 loading class 0x0000000008959de0 done
Event: 1.217 loading class 0x0000000008ea91d0
Event: 1.217 loading class 0x0000000008ea91d0 done
Event: 1.218 loading class 0x0000000008d643e0
Event: 1.218 loading class 0x0000000008d643e0 done
Event: 1.218 loading class 0x0000000008958d20
Event: 1.218 loading class 0x0000000008958d20 done
Event: 1.219 Executing VM operation: RedefineClasses
Dynamic libraries:
0x000000013f7c0000 - 0x000000013f7f1000 S:\OpenJDK\bin\javaw.exe
0x0000000076ef0000 - 0x0000000077099000 C:\WINDOWS\SYSTEM32\ntdll.dll
0x0000000076cb0000 - 0x0000000076dcf000 C:\WINDOWS\system32\kernel32.dll
0x000007fefcde0000 - 0x000007fefce4b000 C:\WINDOWS\system32\KERNELBASE.dll
0x0000000074990000 - 0x0000000074a19000 C:\WINDOWS\System32\SYSFER.DLL
0x000007feff0f0000 - 0x000007feff1cb000 C:\WINDOWS\system32\ADVAPI32.dll
0x000007fefed40000 - 0x000007fefeddf000 C:\WINDOWS\system32\msvcrt.dll
0x000007fefe660000 - 0x000007fefe67f000 C:\WINDOWS\SYSTEM32\sechost.dll
0x000007fefefc0000 - 0x000007feff0ed000 C:\WINDOWS\system32\RPCRT4.dll
0x0000000076df0000 - 0x0000000076eea000 C:\WINDOWS\system32\USER32.dll
0x000007fefe270000 - 0x000007fefe2d7000 C:\WINDOWS\system32\GDI32.dll
0x000007fefe360000 - 0x000007fefe36e000 C:\WINDOWS\system32\LPK.dll
0x000007fefe370000 - 0x000007fefe43a000 C:\WINDOWS\system32\USP10.dll
0x000007fefb460000 - 0x000007fefb654000 C:\WINDOWS\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7601.18837_none_fa3b1e3d17594757\COMCTL32.dll
0x000007fefee30000 - 0x000007fefeea1000 C:\WINDOWS\system32\SHLWAPI.dll
0x000007fefe630000 - 0x000007fefe65e000 C:\WINDOWS\system32\IMM32.DLL
0x000007fefeeb0000 - 0x000007fefefb9000 C:\WINDOWS\system32\MSCTF.dll
0x0000000052420000 - 0x00000000524f2000 S:\OpenJDK\jre\bin\msvcr100.dll
0x0000000051c90000 - 0x000000005241e000 S:\OpenJDK\jre\bin\server\jvm.dll
0x000007fef82b0000 - 0x000007fef82b9000 C:\WINDOWS\system32\WSOCK32.dll
0x000007fefede0000 - 0x000007fefee2d000 C:\WINDOWS\system32\WS2_32.dll
0x000007feff1d0000 - 0x000007feff1d8000 C:\WINDOWS\system32\NSI.dll
0x000007fefaca0000 - 0x000007fefacdb000 C:\WINDOWS\system32\WINMM.dll
0x00000000770c0000 - 0x00000000770c7000 C:\WINDOWS\system32\PSAPI.DLL
0x000007feece20000 - 0x000007feece2f000 S:\OpenJDK\jre\bin\verify.dll
0x000007fee0320000 - 0x000007fee0348000 S:\OpenJDK\jre\bin\java.dll
0x000007fee6ee0000 - 0x000007fee6f03000 S:\OpenJDK\jre\bin\instrument.dll
0x000007fee06e0000 - 0x000007fee06f5000 S:\OpenJDK\jre\bin\zip.dll
0x000007fef1f90000 - 0x000007fef20b5000 C:\WINDOWS\system32\dbghelp.dll
VM Arguments:
jvm_args: -javaagent:S:\.m2\org\jmockit\jmockit\1.21\jmockit-1.21.jar -Dfile.encoding=ISO-8859-1
java_command: org.eclipse.jdt.internal.junit.runner.RemoteTestRunner -version 3 -port 61294 -testLoaderClass org.eclipse.jdt.internal.junit4.runner.JUnit4TestLoader -loaderpluginname org.eclipse.jdt.junit4.runtime -classNames com.examples.JMockitTest
Launcher Type: SUN_STANDARD
Environment Variables:
JRE_HOME=C:\Program Files (x86)\IBM\RationalSDLC\Common\Java5.0\jre
PATH=C:\Python27\;C:\Python27\Scripts;C:\Program Files (x86)\IBM\RationalSDLC\Clearquest\cqcli\bin;C:\PERL51001\Perl\site\bin;C:\PERL51001\Perl\bin;C:\Program Files (x86)\RSA SecurID Token Common;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files (x86)\Microsoft Application Virtualization Client;C:\Program Files (x86)\java\jre6\bin\;C:\Perl64\bin;C:\Program Files (x86)\Perforce;C:\Program Files (x86)\IBM\RationalSDLC\ClearCase\bin;C:\Program Files (x86)\IBM\RationalSDLC\common;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files\TortoiseGit\bin;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.0\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\nodejs\
Thanks
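For reference, the Maven setup described in the question (JMockit before JUnit on the classpath, plus the -javaagent argument for the test JVM) typically looks like the following sketch; the versions match those mentioned above, and the Surefire argLine path is an assumption based on the default local-repository layout:

```xml
<dependencies>
  <!-- JMockit must come before JUnit on the classpath -->
  <dependency>
    <groupId>org.jmockit</groupId>
    <artifactId>jmockit</artifactId>
    <version>1.21</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8</version>
    <scope>test</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- Point the agent at the JMockit jar in the local repository -->
        <argLine>-javaagent:${settings.localRepository}/org/jmockit/jmockit/1.21/jmockit-1.21.jar</argLine>
      </configuration>
    </plugin>
  </plugins>
</build>
```

When running tests from Eclipse instead of Maven, the same -javaagent argument goes into the run configuration's VM arguments, as shown in the jvm_args line of the crash log.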

Flask API server slow response time

I created an API server with Flask and run it with gunicorn using the eventlet worker. I noticed long response times when calling the APIs, so I profiled with my client in two places: once from my laptop and once directly on the Flask API server.
From my laptop:
302556 function calls (295712 primitive calls) in 5.594 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
72 4.370 0.061 4.370 0.061 {method 'poll' of 'select.epoll' objects}
16 0.374 0.023 0.374 0.023 {method 'connect' of '_socket.socket' objects}
16 0.213 0.013 0.213 0.013 {method 'load_verify_locations' of '_ssl._SSLContext' objects}
16 0.053 0.003 0.058 0.004 httplib.py:798(close)
52 0.034 0.001 0.034 0.001 {method 'do_handshake' of '_ssl._SSLSocket' objects}
On server:
231449 function calls (225936 primitive calls) in 3.320 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
12 2.132 0.178 2.132 0.178 {built-in method read}
13 0.286 0.022 0.286 0.022 {method 'poll' of 'select.epoll' objects}
12 0.119 0.010 0.119 0.010 {_ssl.sslwrap}
12 0.095 0.008 0.095 0.008 {built-in method do_handshake}
855/222 0.043 0.000 0.116 0.001 sre_parse.py:379(_parse)
1758/218 0.029 0.000 0.090 0.000 sre_compile.py:32(_compile)
1013 0.027 0.000 0.041 0.000 sre_compile.py:207(_optimize_charset)
12429 0.023 0.000 0.029 0.000 sre_parse.py:182(__next)
So, based on the profile results, my client spends most of its time waiting for the server's response.
I serve the Flask app with gunicorn and eventlet, using the following configuration:
import multiprocessing
bind = ['0.0.0.0:8000']
backlog = 2048
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'eventlet'
user = 'www-data'
group = 'www-data'
loglevel = 'info'
My client is a custom HTTP client that uses eventlet to monkey-patch httplib2 and maintains a connection pool to the server.
I'm stuck troubleshooting here. All server stats look normal. How can I find the bottleneck in my API server?
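To narrow down where the server itself spends time, one option is to profile individual view functions in-process with the standard library's cProfile, which produces per-function breakdowns like the ones pasted above. A stdlib-only sketch (the handler below is a stand-in for one of your Flask view functions, not part of your code):

```python
import cProfile
import io
import pstats
from functools import wraps

def profiled(func):
    """Profile each call and print the top cumulative-time entries."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        prof = cProfile.Profile()
        result = prof.runcall(func, *args, **kwargs)
        buf = io.StringIO()
        stats = pstats.Stats(prof, stream=buf).sort_stats("cumulative")
        stats.print_stats(10)  # keep only the 10 most expensive entries
        print(buf.getvalue())
        return result
    return wrapper

@profiled
def handler():
    # Stand-in for a real Flask view function doing some work
    return sum(i * i for i in range(100_000))
```

Decorating a suspect view with @profiled on each request shows whether time goes to your own code, to SSL handshakes, or to blocking reads, which is the distinction the two profiles above hint at.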