We have been running this environment for about 1.5 years, and over that period the dropped-packet counter on the network interface has climbed to 818733; the daily drop rate, however, appears to be very small.
I want to understand how these drops might impact data loading on the data nodes.
Please also suggest whether there is any way to reset this counter to zero.
[root@abc ~]# ifconfig
eth1      Link encap:Ethernet  HWaddr 00:25:B5:00:00:31
          inet addr:00.00.00.00  Bcast:00.00.00.00  Mask:255.255.255.0
          inet6 addr: fe80::225:b5ff:fe00:31/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:7929368254 errors:0 dropped:818733 overruns:0 frame:0
          TX packets:1557926025 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14650402362276 (13.3 TiB)  TX bytes:2646572126036 (2.4 TiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1096399714 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1096399714 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1620626363855 (1.4 TiB)  TX bytes:1620626363855 (1.4 TiB)

[root@abc ~]#
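For context, 818733 drops against 7929368254 received packets is roughly 0.01% of RX traffic, which matches your observation that the daily rate is small. The same counter is exposed under /sys/class/net, so rather than resetting it you can sample it and watch the delta. Below is a minimal sketch; the interface name eth1 comes from your output, while the 60-second interval is an arbitrary choice:

# Print how many RX packets eth1 dropped in each 60-second window.
# The sysfs counter is cumulative since the NIC driver was loaded.
prev=$(cat /sys/class/net/eth1/statistics/rx_dropped)
while sleep 60; do
  cur=$(cat /sys/class/net/eth1/statistics/rx_dropped)
  echo "$(date '+%F %T')  rx_dropped +$((cur - prev))"
  prev=$cur
done

As for zeroing the counter: on most NICs there is no supported way to reset it in place; the usual options are reloading the NIC driver module (which briefly takes the interface down) or rebooting the host, so in practice it is easier to record a baseline and track the delta as above.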
Flows that experience packet drops reduce their data rate and the bandwidth they consume, with some flows getting very little bandwidth, if any, while flows that experience fewer drops receive more than their fair share of the available network bandwidth. This wide variation in per-flow bandwidth can lead to highly variable completion times for distributed applications that depend on all of their flows or queries completing. To reduce this effect, "deep buffering" in switches is preferable to low latency. Enabling Jumbo Frames across the cluster improves bandwidth through better checksums and may also improve packet integrity. For more information, see our Cluster Planning Guide or contact your Hortonworks field representative.
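If you do rely on Jumbo Frames (your eth1 already shows MTU:9000), it is worth confirming that every hop in the path actually passes 9000-byte frames, since a mismatched switch port can silently fragment or drop large packets. One common check, assuming a 9000-byte MTU and using datanode2 as a placeholder host name, is to ping with the don't-fragment flag and a payload sized so the full IPv4 packet is exactly 9000 bytes:

# Succeeds only if the entire path to datanode2 carries 9000-byte frames.
# -M do sets the don't-fragment bit; -s 8972 sizes the ICMP payload so the
# packet is 8972 + 8 (ICMP header) + 20 (IP header) = 9000 bytes on the wire.
ping -M do -s 8972 -c 4 datanode2

If this fails while a default-size ping succeeds, some device between the two hosts is not configured for jumbo frames.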