
Mellanox Messaging Accelerator (VMA) Release Notes, Version 6.9.1


Contents

1 Introduction
    1.1 System Requirements for VMA 6.9.1
    1.2 VMA Release Contents
    1.3 Related Documentation
    1.4 Certified Applications
2 Changes in Rev 6.9.1 from Rev 6.8.3
3 Bug Fixes History
4 Known Issues
5 Change Log History
Appendix A: Performance Data
    A.1 Ethernet Performance Data
        A.1.1 VMA Benchmark Configuration with ConnectX-3
        A.1.2 VMA UDP Unicast Throughput (Netperf UDP_STREAM Benchmark) Using ConnectX-3
        A.1.3 VMA TCP Unicast Throughput (Netperf TCP_STREAM Benchmark) Using ConnectX-3
        A.1.4 VMA UDP Unicast Latency (Netperf UDP_RR Benchmark) with ConnectX-3
        A.1.5 VMA TCP Latency (Netperf TCP_RR Benchmark) with ConnectX-3
    A.2 InfiniBand Performance Data
        A.2.1 VMA Benchmark Configuration with ConnectX-3
        A.2.2 VMA UDP Unicast Throughput (Netperf UDP_STREAM Benchmark) Using ConnectX-3
        A.2.3 VMA TCP Unicast Throughput (Netperf TCP_STREAM Benchmark) Using ConnectX-3
        A.2.4 VMA UDP Unicast Latency (Netperf UDP_RR Benchmark) with ConnectX-3
        A.2.5 VMA TCP Latency (Netperf TCP_RR Benchmark) with ConnectX-3
Subject: Netcat on SLES 11 SP1
Description: Netcat with VMA on SLES 11 SP1 does not function.
Workaround: N/A

Subject: Issues with sharing of HW resources between working threads
Description: In some multi-threaded applications, different working threads might cause lock contention, which can affect the application's performance.
Workaround: For best performance with VMA, use multi-processing with a single thread per process. This best allocates and separates the HW resources between the working threads and minimizes contention.

Subject: Segmentation fault on NetPIPE exit
Description: Known NetPIPE bug: NetPIPE tries to access read-only memory when exiting.
Workaround: Upgrade to NetPIPE 3.7 or later.

Subject: VMA logs errors when run with VMA_HANDLE_SIGINTR enabled
Description: If VMA runs with VMA_HANDLE_SIGINTR enabled, an error message might be written upon exiting.
Workaround: Ignore the error message, or run with VMA_HANDLE_SIGINTR disabled.

Subject: VMA ping-pong latency degradation as PPS is lowered
Description: VMA suffers from high latency at low message rates.
Workaround: Use VMA_RX_POLL=-1.

Subject: No support for direct broadcast
Description: VMA does not support broadcast traffic.
Workaround: Use libvma.conf to pass broadcast traffic through the OS.

Subject: There is no non-valid pointer handling in VMA
Description: Directing VMA to access a non-valid memory area will cause a segmentation fault.
Workaround: N/A

Subject: First connect/send operation might take more time than expected
Description: VMA allocates resources on the first connect/send operation, which might take up to several tens of milliseconds.
Workaround: N/A
Notes:
• Verify that the maximum number of open FDs (file descriptors) in the system (ulimit -n) is at least twice the number of sockets needed; VMA's internal logic requires one additional FD per offloaded socket. (A programmatic sketch of this check follows the entries below.)
• VMA supports fork() if fork support is enabled and the Mellanox-supported OFED stack is used. In this case, the child process must not use any sockets created by the parent process.

Subject: The VMA application does not exit when you press CTRL-C
Description: When a VMA-enabled application is running, there are several cases in which it does not exit as expected on CTRL-C.
Workaround: Enable SIGINT handling in VMA by setting: export VMA_HANDLE_SIGINTR=1

Subject: Sockperf over VMA: no traffic after the server re-assigns its IP address
Description: VMA does not support network interface or route changes during runtime.
Workaround: N/A

Subject: Packet loss occurs when running sockperf at the maximum PPS rate
Description: The send rate is higher than the receive rate; therefore, when running one sockperf server with one sockperf client, packet loss will occur.
Workaround: Limit the sender's maximum PPS to the receiver's capacity (an example configuration with measured limits appears later in the Known Issues table).
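The file-descriptor headroom mentioned in the notes above can also be verified programmatically at startup. The sketch below is illustrative only and is not part of VMA; `needed_sockets` is a hypothetical application-specific value, and the check simply confirms that RLIMIT_NOFILE is at least twice that value, raising the soft limit toward the hard limit if possible.

```c
#include <stdio.h>
#include <sys/resource.h>

/* Illustrative check: VMA needs roughly one extra FD per offloaded socket,
 * so the release notes recommend a limit of at least 2x the socket count.
 * `needed_sockets` is a hypothetical application-specific value. */
static int ensure_fd_headroom(rlim_t needed_sockets)
{
    struct rlimit rl;
    rlim_t wanted = needed_sockets * 2;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;

    if (rl.rlim_cur >= wanted)
        return 0;                        /* already enough open-file slots */

    /* Try to raise the soft limit up to the hard limit. */
    rl.rlim_cur = (wanted <= rl.rlim_max) ? wanted : rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;

    return (rl.rlim_cur >= wanted) ? 0 : 1;  /* 1: still short, raise ulimit -n */
}

int main(void)
{
    int rc = ensure_fd_headroom(4096);   /* e.g. 4096 offloaded sockets */
    printf("fd headroom check: %d\n", rc);
    return 0;
}
```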
3. Fixed issues in listen socket shutdown. Discovered in 6.6.4; fixed in 6.7.2.
4. Fixed issues that caused multithreaded deadlocks and races in the system. Discovered in 6.5.9; fixed in 6.6.4.
5. Fixed wrong usage of route gateway information. Discovered in 6.5.9; fixed in 6.6.4.
6. Fixed buffer management issues and leaks. Discovered in 6.5.9; fixed in 6.6.4.
7. Fixed multicast loopback filtering on the RX flow. Discovered in 6.5.9; fixed in 6.6.4.
8. Fixed issues that caused multithreaded deadlocks and races in the system. Discovered in 6.4.11; fixed in 6.5.9.
9. Fixed wrong handling of IGMP packets in a multithreaded environment. Discovered in 6.4.11; fixed in 6.5.9.
10. Fixed wrong usage of route gateway information. Discovered in 6.4.11; fixed in 6.5.9.
11. TCP close socket (active and passive sides): buffer leaks, segmentation faults, hangs. Discovered in 6.4.11.
12. IGMP handling: buffer leak when having IB MC over IPR to a router. Discovered in 6.4.11.
13. VMA does not handle MSG_TRUNC correctly. Discovered in 6.4.11.
14. TCP: epoll on a non-offloaded listen socket does not deliver events; hangs on new connection. Discovered in 6.4.11.
15. Receive timeout: SO_RCVTIMEO set to zero should block. Discovered in 6.4.11.

4 Known Issues

The following table describes known issues in VMA 6.8.3 and existing workarounds.

Table 5: VMA Known Issues
Table 11: VMA TCP Latency Benchmark Results
Table 12: Benchmark Setup
Table 13: VMA UDP Unicast Throughput Benchmark Results
Table 14: VMA TCP Unicast Throughput Benchmark Results
Table 15: VMA UDP Unicast Latency Benchmark Results
Table 16: VMA TCP Latency Benchmark Results

1 Introduction

These release notes pertain to the Mellanox Messaging Accelerator (VMA) library for Linux, software version 6.9.1. The VMA library accelerates TCP and UDP socket applications by offloading traffic from user space directly to the network interface card (NIC) or Host Channel Adapter (HCA), without going through the kernel and the standard IP stack (kernel bypass). VMA increases overall traffic packet rate, reduces latency, and improves CPU utilization.

1.1 System Requirements for VMA 6.9.1

The following table presents the currently certified combinations of stacks and platforms, and the supported CPU architectures, for VMA 6.9.1.

Table 1: System Requirements

Specification: Network Adapter Cards
Value: ConnectX-3, ConnectX-3 Pro (firmware v2.34.5000)

Specification: Supported Operating Systems and Kernels
Value: All Linux 64-bit distributions supported by MLNX_OFED 3.0
Mellanox Technologies
Connect. Accelerate. Outperform.

Mellanox Messaging Accelerator (VMA)
Release Notes
Version 6.9.1

www.mellanox.com

Mellanox Technologies Confidential

NOTE:
THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ("PRODUCT(S)") AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES "AS-IS" WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
These results are the RTT/2 for a ping-pong test.

Table 16: VMA TCP Latency Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Latency (usec)
16                      1.650
32                      1.675
64                      1.755
128                     1.660
256                     2.140
512                     2.270
1024                    2.590
Mellanox Technologies
350 Oakmead Parkway, Suite 100
Sunnyvale, CA 94085, U.S.A.
www.mellanox.com
Tel: (408) 970-3400
Fax: (408) 970-3403

Copyright 2015. Mellanox Technologies. All Rights Reserved.

Mellanox, Mellanox logo, BridgeX, ConnectX, Connect-IB, CoolBox, CORE-Direct, GPUDirect, InfiniBridge, InfiniHost, InfiniScale, Kotura, Kotura logo, MetroX, MLNX-OS, PhyX, ScalableHPC, SwitchX, TestX, Virtual Protocol Interconnect, Voltaire and Voltaire logo are registered trademarks of Mellanox Technologies, Ltd. ExtendX, FabricIT, FPGADirect, HPC-X, Mellanox Care, CloudX, Mellanox Open Ethernet, Mellanox PeerDirect, Mellanox Virtual Modular Switch, MetroDX, NVMeDirect, StPU, Switch-IB, Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Document Number: DOC-00329

Mellanox Technologies Confidential
Table 13: VMA UDP Unicast Throughput Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Bandwidth (Gb/s)
16                      0.539
32                      1.045
64                      2.054
128                     3.670
256                     7.259
512                     13.353
1024                    23.817
1472                    31.469

A.2.3 VMA TCP Unicast Throughput (Netperf TCP_STREAM Benchmark) Using ConnectX-3

The following table shows the TCP unicast throughput benchmark results from the test application netperf TCP_STREAM.

Table 14: VMA TCP Unicast Throughput Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Bandwidth (Gb/s)
16                      1.715
32                      3.251
64                      5.779
128                     8.255
256                     14.052
512                     18.123
1024                    21.311
1460                    23.907

A.2.4 VMA UDP Unicast Latency (Netperf UDP_RR Benchmark) with ConnectX-3

The following table shows the UDP unicast latency benchmark results from the test application netperf UDP_RR. These results are the RTT/2 for a ping-pong test.

Table 15: VMA UDP Unicast Latency Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Latency (usec)
16                      1.145
32                      1.175
64                      1.220
128                     1.320
256                     1.700
512                     1.645
1024                    2.125

A.2.5 VMA TCP Latency (Netperf TCP_RR Benchmark) with ConnectX-3

The following table shows the TCP latency benchmark results from the test application netperf TCP_RR.
Table 3: VMA Certified Applications (continued)

sfnt (Solarflare) - Bandwidth and Latency Benchmarking - Version 1.4.0
NetPIPE (Open Source) - Network Protocol Independent Performance Evaluator - Version 3.7.2
UMS, formerly LBM (Informatica) - Message Middleware Infrastructures - Version 6.7
Opra FeedHandler (NYSE Technologies) - Market Data Infrastructures - Running with WDF, LBM, UMS, RV, WombatFS middleware

2 Changes in Rev 6.9.1 from Rev 6.8.3

The following describe the main changes and new features in VMA 6.9.1:
• Reduced contention between the VMA internal thread and the user threads
• Moved control-flow tasks to the VMA internal thread to improve latency
• Added SYN/FIN throttling support to improve latency during connection control tasks
• Support for creating vma_stats shared memory files in a given directory
• Added retransmission counters to vma_stats
• Support for RX checksum (CSUM) verification offload, depending on device support
• Handle the DEVICE_FATAL event to support hot unplug

3 Bug Fixes History

The following describe the issues that have been resolved in VMA.

Table 4: Bug Fixes History

1. Fixed SO_RCVBUF and SO_SNDBUF for TCP. Discovered in 6.8.3; fixed in 6.9.1.
2. Fixed crash when there is no route back to the SYN sender. Discovered in 6.8.3; fixed in 6.9.1.
Appendix A: Performance Data

A.1 Ethernet Performance Data

The performance envelope of the VMA library is described in the following sections.

A.1.1 VMA Benchmark Configuration with ConnectX-3

The following table describes the setup for the VMA library benchmarking.

Table 7: Benchmark Setup (Specifications / Details)

A.1.2 VMA UDP Unicast Throughput (Netperf UDP_STREAM Benchmark) Using ConnectX-3

The following table shows the UDP unicast throughput benchmark results from the test application netperf UDP_STREAM.

Table 8: VMA UDP Unicast Throughput Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Bandwidth (Gb/s)
16                      0.527
32                      1.057
64                      1.981
128                     3.690
256                     1.597
512                     14.092
1024                    24.813
1472                    33.639

A.1.3 VMA TCP Unicast Throughput (Netperf TCP_STREAM Benchmark) Using ConnectX-3

The following table shows the TCP unicast throughput benchmark results from the test application netperf TCP_STREAM.

Table 9: VMA TCP Unicast Throughput Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Bandwidth (Gb/s)
16                      1.713
32                      3.233
64                      5.828
128                     9.532
256                     14.323
512                     18.681
1024                    21.913
1460                    24.880

A.1.4 VMA UDP Unicast Latency (Netperf UDP_RR Benchmark) with ConnectX-3
Subject: High Availability (HA)
Description: As of VMA 6.6.4, the only bonding mode supported is Active-Passive, and only with fail_over_mac=1.
Workaround: N/A

Subject: VLAN and High Availability
Description: VLAN on the bond interface does not function properly when bonding is configured with fail_over_mac=1, due to a kernel bug.
Workaround: Red Hat only: configure bonding over VLAN interfaces instead. This solution is not applicable to SLES OSes.

Subject: Issues with UDP fragmented traffic reassembly
Description: RX UDP unicast and multicast traffic in Ethernet, and RX UDP unicast traffic in InfiniBand, with fragmented packets (message size larger than the MTU) is not offloaded by VMA and passes through the kernel network stack. There might be performance degradation.
Workaround: N/A

Subject: VMA_TRACELEVEL=4 causes performance degradation
Description: VMA_TRACELEVEL=4 (debug mode) prints more information, which causes higher latency.
Workaround: For best performance, run VMA with a VMA_TRACELEVEL value lower than 4.

Subject: Huge page reserved resources
Description: The system runs out of memory due to huge page reserved resources.
Workaround: Use contiguous pages instead of huge pages to gain performance improvements, by setting VMA_MEM_ALLOC_TYPE=1 (this is the default mode).

Subject: VMA PANIC while opening a large number of sockets
Description: The following PANIC is displayed when there are not enough open files defined on the server:
VMA PANIC ... failed to create internal epoll (ret=-1: Too many open files)

Subject: There is limited support for fork()
Description: Using fork() in a program is limited to the following conditions: a parent process can continue running without any limitations on memory access; a child process can continue running only if it does not access any sockets created by the parent process.
Workaround: General fork() support is available from kernel 2.6.16 and later, provided that applications do not use threads. The fork() call is supported provided that the parent process does not run before the child exits or calls exec(). You can ensure that the parent process does not run before the child exits by calling wait(childpid), or ensure that it does not run before the child calls exec() by using application-specific means. The POSIX system() call is supported.
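A minimal sketch of a fork() pattern that stays within the constraints listed in the fork() entry above: the child avoids the parent's sockets and execs immediately, and the parent does not continue until the child has finished (approximated here by waiting on the child). This is an illustration of the stated conditions, not VMA code; the exec'd program ("true") is just a placeholder.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative fork() usage under the constraints above:
 * - the child never touches sockets created by the parent;
 * - the parent waits so it does not run before the child exits or execs. */
int main(void)
{
    /* ... parent may have created offloaded sockets before this point ... */

    pid_t child = fork();
    if (child < 0) {
        perror("fork");
        return 1;
    }

    if (child == 0) {
        /* Child: do not use any parent-created sockets; exec right away. */
        execlp("true", "true", (char *)NULL);
        _exit(127);                     /* reached only if exec failed */
    }

    /* Parent: block until the child has finished. */
    int status = 0;
    if (waitpid(child, &status, 0) < 0)
        perror("waitpid");

    /* Parent continues using its sockets only after the wait returns. */
    return 0;
}
```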
Example of limiting the sender's maximum PPS to the receiver's capacity, measured with the following configuration:
• OS: Red Hat Enterprise Linux Server release 6.2 (Santiago)
• Kernel: 2.6.32-220.el6.x86_64
• Link layer: InfiniBand 56G / Ethernet 10G
• PCI generation: Gen3
• Architecture: x86_64
• CPUs: 16; core(s) per socket: 8; CPU socket(s): 2; NUMA node(s): 2
• Vendor ID: GenuineIntel; CPU family: 6; Model: 45; Stepping: 7; CPU MHz: 2599.926

Measured limits:
• MC, 1 socket: max PPS 3M
• MC, 10 sockets (select): max PPS 1.5M
• MC, 50 sockets (select): max PPS 1M
• UC, 1 socket: max PPS 2.8M
• UC, 10 sockets: max PPS 1.5M
• UC, 20 sockets: max PPS 1.5M

Subject: VMA behavior of epoll EPOLLET (edge-triggered) and EPOLLOUT flags with TCP sockets
Description: VMA behavior of epoll with the EPOLLET (edge-triggered) and EPOLLOUT flags on TCP sockets differs from the OS. VMA triggers an EPOLLOUT event on every received ACK (data only, not SYN/FIN); the OS triggers an EPOLLOUT event only after the buffer was full.
Workaround: N/A

Subject: VMA behavior of epoll EPOLLET (edge-triggered) and EPOLLOUT flags with UDP sockets
Description: VMA will trigger two ready events instead of one in the case of epoll with the EPOLLET and EPOLLOUT flags on UDP sockets. (A registration sketch for these two entries follows below.)
Workaround: N/A
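For reference, this is the kind of registration the two entries above describe: a socket armed with EPOLLET | EPOLLOUT (plus EPOLLIN). Under the OS, the edge-triggered EPOLLOUT event fires only when the send buffer transitions back to writable, while the entries above note that VMA reports it more often (every ACK for TCP, twice instead of once for UDP). A minimal sketch of standard epoll usage, with an already-connected `fd` assumed; it is not VMA-specific code.

```c
#include <stdio.h>
#include <sys/epoll.h>

/* Arm a connected socket `fd` edge-triggered for both directions.
 * The known-issue entries above describe how often EPOLLOUT is then
 * reported by VMA versus the kernel. */
static int arm_edge_triggered(int epfd, int fd)
{
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLOUT | EPOLLET;
    ev.data.fd = fd;
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* Typical wait loop: with EPOLLET, each readiness report must be fully
 * drained (read/write until EAGAIN) before the next event can be expected. */
static void wait_once(int epfd)
{
    struct epoll_event events[16];
    int n = epoll_wait(epfd, events, 16, 1000 /* ms */);
    for (int i = 0; i < n; i++) {
        if (events[i].events & EPOLLOUT)
            printf("fd %d writable\n", events[i].data.fd);
        if (events[i].events & EPOLLIN)
            printf("fd %d readable\n", events[i].data.fd);
    }
}
```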
Subject: SFNT_STREAM UDP client hanging issue
Description: "ERROR: Sync messages at end of test lost; ERROR: Test failed." This only occurs with the poll flag. Occasionally the SFNT_STREAM UDP client hangs when running multiple times.
Workaround: Set a higher acknowledgment waiting time value on the sfnt-stream client side.

Subject: When using VMA_TX_MAX_INLINE=0, post send fails
Description: When running VMA with VMA_TX_MAX_INLINE=0, the following error will be received:
VMA ERROR qpm ... send failed post send (errno=11 Resource temporarily unavailable)
VMA ERROR qpm ... send bad wr ... max inline data 0
In this scenario the send operation will fail.
Workaround: Set the VMA_TX_MAX_INLINE value to a message size smaller than the one used in the application.

Subject: MC loopback in InfiniBand
Description: MC loopback in InfiniBand functions only between two different processes. It will not work between threads in the same process.
Workaround: N/A

Subject: Ethernet loopback is not functional between VMA and the OS
Description: Ethernet loopback functions only if both sides are either offloaded or not offloaded.
Workaround: N/A

Subject: Error when running netperf 2.4.4 with VMA
Description: The following error may occur when running netperf TCP tests with VMA: "remote error 107 (Transport endpoint is not connected)".
Workaround: Use netperf 2.6.0.

Subject: A packet is not sent if the socket is closed immediately after send
Description: Occasionally a packet is not sent if the socket is closed immediately after send (also for a blocking socket).
Workaround: Wait several seconds after send before closing the socket.

Subject: Iomux call with empty sockets
Description: It can take VMA more time than the OS to return from an iomux call if all sockets in this iomux are empty sockets.
Workaround: N/A

Subject: TCP throughput with maximum rate
Description: TCP throughput at the maximum rate may suffer from traffic hiccups.
Workaround: Set the mps (messages per second) to 1000000.
• Added support for the accept4() system call (see the sketch after this list)
• Added support for the SO_BINDTODEVICE socket option
• Added support for the SOCK_NONBLOCK and SOCK_CLOEXEC socket flags
• Added ring statistics to vma_stats

Release 6.5.9:
• Added support for all Linux OSs supported in MLNX_OFED 2.1-1.0.0
• Improved TCP latency performance
• Improved the VMA blacklist
• Added the Extra API to control offload capabilities
• Added the TCP CUBIC congestion control algorithm
• Increased the number of supported sockets to thousands of sockets
• Added IP_PKTINFO support in UDP recvmsg()
• Ethernet loopback support

Release 6.4.11:
• Ubuntu 12.04 OS support
• Improved TCP stability
• Improved TCP throughput performance
• Improved performance of applications with multiple epoll instances
• Added TCP window scaling and Generic Receive Offload (GRO)
• Support for Ethernet unicast loopback via the kernel network stack (not offloaded)
• Support for sendmmsg()
• Support for epoll_pwait(), pselect() and ppoll()
• Support in setsockopt() for ADD/DROP MEMBERSHIP and MULTICAST_IF
• Support for the TCP socket MSG_PEEK recv flag
• Support for the TCP socket MSG_DONTWAIT send flag
• Support for getsockopt(SOL_SOCKET, SO_ERROR)
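Two of the 6.6.4 additions listed above, accept4() and the SOCK_NONBLOCK / SOCK_CLOEXEC flags, combine naturally. The sketch below shows the standard Linux (glibc) usage that these change-log items refer to; the listening socket `listen_fd` is assumed to already exist, and the snippet is illustrative rather than VMA-specific.

```c
#define _GNU_SOURCE
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>

/* Accept a connection and atomically mark it non-blocking and close-on-exec,
 * using the accept4() call listed among the 6.6.4 additions above. */
static int accept_nonblocking(int listen_fd)
{
    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);

    int conn_fd = accept4(listen_fd, (struct sockaddr *)&peer, &len,
                          SOCK_NONBLOCK | SOCK_CLOEXEC);
    if (conn_fd < 0)
        perror("accept4");
    return conn_fd;
}
```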
List of Tables

Table 1: System Requirements
Table 2: Release Contents
Table 3: VMA Certified Applications
Table 4: Bug Fixes History
Table 5: VMA Known Issues
Table 6: Change Log History
Table 7: Benchmark Setup
Table 8: VMA UDP Unicast Throughput Benchmark Results
Table 9: VMA TCP Unicast Throughput Benchmark Results
Table 10: VMA UDP Unicast Latency Benchmark Results
Subject: Calling select() after shutdown(write) returns the socket ready to write
Description: Calling select() upon shutdown of a socket will return "ready to write" instead of a timeout, while select() is expected to return a timeout.
Workaround: N/A

Subject: VMA does not raise SIGPIPE
Description: VMA does not raise SIGPIPE on connection shutdown.
Workaround: N/A

Subject: When there are no packets in the socket, it takes longer to return
Description: VMA polls the CQ for packets; if no packets are available in the socket layer, it takes longer to return from the read call.
Workaround: N/A

Subject: select() with more than 1024 sockets is not supported
Description: select() with more than 1024 sockets is not supported (see the sketch after the change log excerpt below).
Workaround: Compile VMA with SELECT_BIG_SETSIZE defined.

5 Change Log History

Table 6: Change Log History

Release 6.8.3:
• Added support for all Linux OSs supported in MLNX_OFED 2.4
• Added support for TCP zero copy in the Extra API

Release 6.7.2:
• Added support for all Linux OSs supported in MLNX_OFED v2.3-1.0.X
• Added support for routing rules and secondary route tables
• Added support for the ARM 64-bit architecture (beta level)
• Added support for the PowerPC 64-bit architecture (beta level)

Release 6.6.4:
• Added support for all Linux operating systems supported in MLNX_OFED 2.2-1.0.0
• Improved interrupt-driven mode performance
• Added interrupt moderation and adaptive interrupt moderation support
• Added UDP software timestamps support
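As referenced in the "select() with more than 1024 sockets" entry above, the limitation stems from the fact that a standard fd_set is a fixed bitmap of FD_SETSIZE (normally 1024) descriptors, which is what the SELECT_BIG_SETSIZE build option in the workaround addresses. The snippet below is only a reminder of the standard-size select() usage and is not VMA code; `fd` is an assumed, already-open socket.

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

/* fd_set can only describe descriptors 0 .. FD_SETSIZE-1 (usually 1023),
 * which is why select() over more than 1024 sockets needs the special
 * SELECT_BIG_SETSIZE build mentioned in the workaround above. */
static int wait_readable(int fd)
{
    if (fd >= FD_SETSIZE) {
        fprintf(stderr, "fd %d does not fit in a standard fd_set\n", fd);
        return -1;
    }

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1 second timeout */
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
```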
The following table shows the UDP unicast latency benchmark results from the test application netperf UDP_RR. These results are the RTT/2 for a ping-pong test.

Table 10: VMA UDP Unicast Latency Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Latency (usec)
16                      1.125
32                      1.175
64                      1.250
128                     1.310
256                     1.640
512                     1.840
1024                    2.155

A.1.5 VMA TCP Latency (Netperf TCP_RR Benchmark) with ConnectX-3

The following table shows the TCP latency benchmark results from the test application netperf TCP_RR. These results are the RTT/2 for a ping-pong test.

Table 11: VMA TCP Latency Benchmark Results (VMA 6.8.3 with ConnectX-3, PCI Gen3)

Message Size (Bytes)    Latency (usec)
16                      1.650
32                      1.700
64                      1.755
128                     1.850
256                     2.090
512                     2.310
1024                    2.665

A.2 InfiniBand Performance Data

A.2.1 VMA Benchmark Configuration with ConnectX-3

The following table describes the setup for the VMA library benchmarking.

Table 12: Benchmark Setup (Specifications / Details)

A.2.2 VMA UDP Unicast Throughput (Netperf UDP_STREAM Benchmark) Using ConnectX-3

The following table shows the UDP unicast throughput benchmark results from the test application netperf UDP_STREAM.
Specification: Minimum memory requirements
Value: 1 GB of free memory for installation

1.2 VMA Release Contents

Table 2: Release Contents

Item: Binary
Description: RPM and DEB packages for 64-bit architecture Linux distributions: libvma-6.9.1-0.x86_64.rpm and libvma-6.9.1-0.x86_64.deb

Item: Documentation
Description: VMA Release Notes, Installation Guide, VMA User Manual

1.3 Related Documentation

• Mellanox Messaging Accelerator (VMA) Library for Linux User Manual (DOC-00393)
• Mellanox VMA Installation Guide (DOC-10055)
• Performance Tuning Guidelines for Mellanox Network Adapters (DOC-3368), available at www.mellanox.com

1.4 Certified Applications

The VMA library version 6.9.1 was successfully tested and is certified to work with the applications listed in the following table.

Table 3: VMA Certified Applications

Memcached (Open Source) - High-performance distributed memory object caching system - Version 1.4.20 - http://memcached.org
Redis (Open Source) - Advanced key-value store - http://redis.io
sockperf (Mellanox Open Source) - Bandwidth and Latency Benchmarking - Version 2.5.243 - Included in the VMA package
iperf (Open Source) - Bandwidth Benchmarking - Version 2.0.5 - code.google.com
netperf (Open Source) - Bandwidth and Latency Benchmarking - Version 2.6.0
Subject: VMA does not close connections upon process termination
Description: VMA does not close connections (send FIN) when its own process is terminated (e.g., with CTRL-C).
Workaround: N/A

Subject: MC traffic with a VMA process and a non-VMA process on the same machine
Description: When a non-offloaded process joins the same MC address as another VMA process on the same machine, the non-offloaded process will not get traffic.
Workaround: Run both processes with VMA.

Subject: Epoll with EPOLLONESHOT
Description: Occasionally, epoll with EPOLLONESHOT does not function properly (the standard re-arm pattern is sketched below).
Workaround: N/A

Subject: SFNT_STREAM UDP with poll muxer flag ends with an error on the client side
Description: Occasionally, when running a UDP SFNT_STREAM client with the poll muxer flag, the client side ends with an unexpected error.
Workaround: Set a higher acknowledgment waiting time value on the sfnt-stream client side.
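For context on the EPOLLONESHOT entry above, this is the standard one-shot pattern an application would be using: the descriptor is disabled after each delivered event and must be re-armed with EPOLL_CTL_MOD. A minimal sketch with an already-registered `fd` assumed; it illustrates ordinary Linux epoll usage, not VMA internals.

```c
#include <stdio.h>
#include <sys/epoll.h>

/* Standard EPOLLONESHOT usage: after an event is delivered the fd is
 * disabled, so it must be explicitly re-armed before the next event. */
static int arm_oneshot(int epfd, int fd, int op)
{
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = fd;
    return epoll_ctl(epfd, op, fd, &ev);   /* EPOLL_CTL_ADD or EPOLL_CTL_MOD */
}

static void handle_events(int epfd)
{
    struct epoll_event events[8];
    int n = epoll_wait(epfd, events, 8, 1000 /* ms */);
    for (int i = 0; i < n; i++) {
        int fd = events[i].data.fd;
        /* ... read from fd until EAGAIN ... */
        /* Re-arm the one-shot descriptor so it can report the next event. */
        if (arm_oneshot(epfd, fd, EPOLL_CTL_MOD) != 0)
            perror("epoll_ctl(EPOLL_CTL_MOD)");
    }
}
```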
