Excitel Fiber Broadband Experience in Delhi NCR

He is a well-known personality in the networking sector.
Haha. Not that I agree with that conclusion, but thanks for the kind words. I see the post you replied to is deleted now.


Anyway, some updates from more testing. It's 5 am right now, absolutely off-peak. I wanted to test around this time to find possible policies impacting speeds without the results being skewed by possible congestion.

I ran a dozen iperf3 tests against three different servers of mine. On my 400Mbps plan, this is what I get (testing from my Linux desktop, wired of course):

Single thread:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 56.3 MBytes 472 Mbits/sec
[ 5] 1.00-2.00 sec 38.5 MBytes 323 Mbits/sec
[ 5] 2.00-3.00 sec 33.4 MBytes 280 Mbits/sec
[ 5] 3.00-4.00 sec 21.6 MBytes 181 Mbits/sec
[ 5] 4.00-5.00 sec 22.6 MBytes 190 Mbits/sec
[ 5] 5.00-6.00 sec 15.2 MBytes 128 Mbits/sec
[ 5] 6.00-7.00 sec 968 KBytes 7.93 Mbits/sec
[ 5] 7.00-8.00 sec 495 KBytes 4.06 Mbits/sec
[ 5] 8.00-9.00 sec 615 KBytes 5.04 Mbits/sec
[ 5] 9.00-10.00 sec 426 KBytes 3.49 Mbits/sec
[ 5] 10.00-11.00 sec 338 KBytes 2.77 Mbits/sec
[ 5] 11.00-12.00 sec 630 KBytes 5.16 Mbits/sec
[ 5] 12.00-13.00 sec 729 KBytes 5.97 Mbits/sec
[ 5] 13.00-14.00 sec 508 KBytes 4.17 Mbits/sec
[ 5] 14.00-15.00 sec 464 KBytes 3.80 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-15.04 sec 196 MBytes 109 Mbits/sec 261 sender
[ 5] 0.00-15.00 sec 193 MBytes 108 Mbits/sec receiver

iperf Done.
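For reference, the download runs above map to invocations roughly like the ones below (the hostname is a placeholder, and I'm assuming -R reverse mode so the client receives). The summary arithmetic also checks out, since iperf3 counts MBytes as 2^20 bytes and Mbits as 10^6 bits:

```shell
# Assumed invocations (placeholder hostname; -R = reverse mode, client receives):
#   iperf3 -c iperf.example.net -t 15 -R         # download, single stream
#   iperf3 -c iperf.example.net -t 15 -R -P 20   # download, twenty streams

# Sanity-check the single-stream sender summary: 196 MBytes over 15.04 s
awk 'BEGIN { printf "%.0f Mbits/sec\n", 196 * 1048576 * 8 / 15.04 / 1e6 }'
```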




Twenty threads:

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-15.02 sec 35.2 MBytes 19.6 Mbits/sec 598 sender
[ 5] 0.00-15.00 sec 34.0 MBytes 19.0 Mbits/sec receiver
[ 7] 0.00-15.02 sec 35.4 MBytes 19.7 Mbits/sec 591 sender
[ 7] 0.00-15.00 sec 34.0 MBytes 19.0 Mbits/sec receiver
[ 9] 0.00-15.02 sec 37.5 MBytes 21.0 Mbits/sec 539 sender
[ 9] 0.00-15.00 sec 36.4 MBytes 20.3 Mbits/sec receiver
[ 11] 0.00-15.02 sec 39.2 MBytes 21.9 Mbits/sec 479 sender
[ 11] 0.00-15.00 sec 37.9 MBytes 21.2 Mbits/sec receiver
[ 13] 0.00-15.02 sec 33.6 MBytes 18.8 Mbits/sec 480 sender
[ 13] 0.00-15.00 sec 32.6 MBytes 18.2 Mbits/sec receiver
[ 15] 0.00-15.02 sec 37.1 MBytes 20.7 Mbits/sec 413 sender
[ 15] 0.00-15.00 sec 36.1 MBytes 20.2 Mbits/sec receiver
[ 17] 0.00-15.02 sec 33.5 MBytes 18.7 Mbits/sec 328 sender
[ 17] 0.00-15.00 sec 32.4 MBytes 18.1 Mbits/sec receiver
[ 19] 0.00-15.02 sec 33.9 MBytes 18.9 Mbits/sec 315 sender
[ 19] 0.00-15.00 sec 32.8 MBytes 18.3 Mbits/sec receiver
[ 21] 0.00-15.02 sec 39.1 MBytes 21.9 Mbits/sec 684 sender
[ 21] 0.00-15.00 sec 37.5 MBytes 21.0 Mbits/sec receiver
[ 23] 0.00-15.02 sec 33.0 MBytes 18.4 Mbits/sec 281 sender
[ 23] 0.00-15.00 sec 32.5 MBytes 18.2 Mbits/sec receiver
[ 25] 0.00-15.02 sec 37.7 MBytes 21.1 Mbits/sec 590 sender
[ 25] 0.00-15.00 sec 35.9 MBytes 20.1 Mbits/sec receiver
[ 27] 0.00-15.02 sec 28.7 MBytes 16.0 Mbits/sec 371 sender
[ 27] 0.00-15.00 sec 27.4 MBytes 15.3 Mbits/sec receiver
[ 29] 0.00-15.02 sec 38.8 MBytes 21.7 Mbits/sec 525 sender
[ 29] 0.00-15.00 sec 37.6 MBytes 21.0 Mbits/sec receiver
[ 31] 0.00-15.02 sec 27.9 MBytes 15.6 Mbits/sec 242 sender
[ 31] 0.00-15.00 sec 27.5 MBytes 15.4 Mbits/sec receiver
[ 33] 0.00-15.02 sec 33.1 MBytes 18.5 Mbits/sec 393 sender
[ 33] 0.00-15.00 sec 32.2 MBytes 18.0 Mbits/sec receiver
[ 35] 0.00-15.02 sec 27.9 MBytes 15.6 Mbits/sec 231 sender
[ 35] 0.00-15.00 sec 27.6 MBytes 15.4 Mbits/sec receiver
[ 37] 0.00-15.02 sec 47.6 MBytes 26.6 Mbits/sec 717 sender
[ 37] 0.00-15.00 sec 46.2 MBytes 25.8 Mbits/sec receiver
[ 39] 0.00-15.02 sec 26.7 MBytes 14.9 Mbits/sec 258 sender
[ 39] 0.00-15.00 sec 26.4 MBytes 14.8 Mbits/sec receiver
[ 41] 0.00-15.02 sec 34.6 MBytes 19.3 Mbits/sec 483 sender
[ 41] 0.00-15.00 sec 33.8 MBytes 18.9 Mbits/sec receiver
[ 43] 0.00-15.02 sec 42.5 MBytes 23.7 Mbits/sec 558 sender
[ 43] 0.00-15.00 sec 41.5 MBytes 23.2 Mbits/sec receiver
[SUM] 0.00-15.02 sec 703 MBytes 393 Mbits/sec 9076 sender
[SUM] 0.00-15.00 sec 682 MBytes 382 Mbits/sec receiver

iperf Done.


This is likely due to one of two reasons:
  1. There is a (DPI?) device shaping traffic flows, or some policy on upstream-facing devices shaping the traffic.
  2. They aggregate bandwidth over too many LACP bundles of 10G ports and some member links are choking. Multiple threads hash across different members, reducing the chance of a single stream being stuck on a choked port.
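The mechanics behind #2 can be sketched: LACP pins each flow to one member link via a hash of its headers, so a single TCP stream can never escape a choked member, while twenty streams spread across all of them. A toy illustration (real gear hashes the 5-tuple, not a bare port modulo):

```shell
# Toy LACP-style link selection: hash (here: source port) modulo member count.
# One flow always lands on one member; many flows spread across all of them.
awk 'BEGIN {
  links = 4
  for (port = 50000; port < 50020; port++)
    count[port % links]++                  # each flow pinned to one member
  for (m = 0; m < links; m++)
    printf "link %d: %d flows\n", m, count[m]
}'
```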

If #2 is true, we should not see this behaviour for outbound traffic, considering that Excitel is heavily an inbound (eyeball) network. Their outbound direction should be lying almost empty (20-30% of what their inbound flows look like). So let's test again, but for uploads this time:
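Since iperf3's default direction is client-to-server, the upload runs simply drop the -R flag (again, the hostname below is a placeholder). And as a cross-check on the twenty-stream SUM line further down, 600 MBytes over 15 s works out to:

```shell
# Assumed invocations (placeholder hostname; default direction = upload):
#   iperf3 -c iperf.example.net -t 15          # upload, single stream
#   iperf3 -c iperf.example.net -t 15 -P 20    # upload, twenty streams

# 600 MBytes (2^20 bytes each) sent in 15 s:
awk 'BEGIN { printf "%.0f Mbits/sec\n", 600 * 1048576 * 8 / 15 / 1e6 }'
```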


Single thread uploads:
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 17.8 MBytes 149 Mbits/sec 61 255 KBytes
[ 5] 1.00-2.00 sec 7.50 MBytes 62.9 Mbits/sec 2 138 KBytes
[ 5] 2.00-3.00 sec 5.00 MBytes 41.9 Mbits/sec 2 109 KBytes
[ 5] 3.00-4.00 sec 2.50 MBytes 21.0 Mbits/sec 1 93.5 KBytes
[ 5] 4.00-5.00 sec 2.50 MBytes 21.0 Mbits/sec 1 83.9 KBytes
[ 5] 5.00-6.00 sec 2.50 MBytes 21.0 Mbits/sec 1 76.8 KBytes
[ 5] 6.00-7.00 sec 1.25 MBytes 10.5 Mbits/sec 4 50.4 KBytes
[ 5] 7.00-8.00 sec 2.50 MBytes 21.0 Mbits/sec 0 70.8 KBytes
[ 5] 8.00-9.00 sec 2.50 MBytes 21.0 Mbits/sec 0 91.1 KBytes
[ 5] 9.00-10.00 sec 2.50 MBytes 21.0 Mbits/sec 0 112 KBytes
[ 5] 10.00-11.00 sec 3.75 MBytes 31.5 Mbits/sec 2 92.3 KBytes
[ 5] 11.00-12.00 sec 3.75 MBytes 31.5 Mbits/sec 0 113 KBytes
[ 5] 12.00-13.00 sec 3.75 MBytes 31.5 Mbits/sec 0 131 KBytes
[ 5] 13.00-14.00 sec 2.50 MBytes 21.0 Mbits/sec 3 113 KBytes
[ 5] 14.00-15.00 sec 3.75 MBytes 31.5 Mbits/sec 1 94.7 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-15.00 sec 64.1 MBytes 35.8 Mbits/sec 78 sender
[ 5] 0.00-15.04 sec 60.8 MBytes 33.9 Mbits/sec receiver
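The ~21 Mbps plateau in the single-stream upload is consistent with TCP being window-limited by the loss: throughput ≈ cwnd / RTT. Assuming an RTT of roughly 40 ms to the test server (an assumption — the RTT isn't shown above), the ~100 KByte cwnd caps a single stream at about:

```shell
# TCP throughput ceiling for one stream: cwnd / RTT
# cwnd ~100 KBytes (from the run above), RTT ~40 ms (assumed, not measured here)
awk 'BEGIN { printf "%.0f Mbits/sec\n", 100 * 1024 * 8 / 0.040 / 1e6 }'
```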



Twenty thread uploads:

- - - - - - - - - - - - - - - - - - - - - - - - -
[ 23] 0.00-15.00 sec 25.4 MBytes 14.2 Mbits/sec 90 sender
[ 23] 0.00-15.03 sec 24.3 MBytes 13.6 Mbits/sec receiver
[ 25] 0.00-15.00 sec 30.1 MBytes 16.8 Mbits/sec 54 sender
[ 25] 0.00-15.03 sec 29.5 MBytes 16.4 Mbits/sec receiver
[ 27] 0.00-15.00 sec 31.8 MBytes 17.8 Mbits/sec 64 sender
[ 27] 0.00-15.03 sec 31.0 MBytes 17.3 Mbits/sec receiver
[ 29] 0.00-15.00 sec 35.7 MBytes 19.9 Mbits/sec 61 sender
[ 29] 0.00-15.03 sec 35.0 MBytes 19.6 Mbits/sec receiver
[ 31] 0.00-15.00 sec 36.5 MBytes 20.4 Mbits/sec 57 sender
[ 31] 0.00-15.03 sec 35.9 MBytes 20.1 Mbits/sec receiver
[ 33] 0.00-15.00 sec 28.1 MBytes 15.7 Mbits/sec 45 sender
[ 33] 0.00-15.03 sec 27.2 MBytes 15.2 Mbits/sec receiver
[ 35] 0.00-15.00 sec 25.3 MBytes 14.2 Mbits/sec 29 sender
[ 35] 0.00-15.03 sec 25.2 MBytes 14.0 Mbits/sec receiver
[ 37] 0.00-15.00 sec 26.0 MBytes 14.5 Mbits/sec 32 sender
[ 37] 0.00-15.03 sec 25.8 MBytes 14.4 Mbits/sec receiver
[ 39] 0.00-15.00 sec 31.3 MBytes 17.5 Mbits/sec 24 sender
[ 39] 0.00-15.03 sec 31.0 MBytes 17.3 Mbits/sec receiver
[ 41] 0.00-15.00 sec 26.2 MBytes 14.6 Mbits/sec 36 sender
[ 41] 0.00-15.03 sec 26.0 MBytes 14.5 Mbits/sec receiver
[ 43] 0.00-15.00 sec 23.8 MBytes 13.3 Mbits/sec 22 sender
[ 43] 0.00-15.03 sec 23.6 MBytes 13.2 Mbits/sec receiver
[SUM] 0.00-15.00 sec 600 MBytes 336 Mbits/sec 1192 sender
[SUM] 0.00-15.03 sec 586 MBytes 327 Mbits/sec receiver

Again, similar behaviour. But since ICMP stays very stable even to far-off locations, this hints at some sort of overall traffic-shaping policy.
 
@Anurag Bhatia Legend himself on this forum. Hello sir, Good to see you here 😄.
Haha. Nothing like that. Thanks for your kind words and greeting @unnecessary


Optical power has gone down again.
Transmitting light power: 2.2 dBm
Receiving light power: -24.7 dBm

That seems overall OK for the specs. Power can go as low as -27 or even -28 dBm in most cases without causing issues. However, they (LCOs in this case) aim for better margin numbers of -22 to -23 dBm. This leaves sufficient margin for cable issues (common for overhead cables thanks to monkeys and what not) and also a safer margin to add splits near the subscriber end, as well as in the backend for newer connections. Except for Jio & Airtel, most other networks add PON splitters on the go. They won't simply do an 8- or 16-way split per pole. For Jio that made sense given the scale, the anticipated user base, and an overall design that works across the country (without having to re-do it again and again).

For smaller players that would be quite a bit of cost to commit in advance. To their advantage, smaller players can react faster than a large company like Jio. For smaller, LCO-based networks, you can expect higher power when the network is freshly deployed, decreasing over time as they add more splits to onboard customers.
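As a back-of-envelope check on the optical numbers: each 2-way PON split costs roughly 3.5 dB, so a 1:32 split burns about 17.5 dB. Assuming the OLT side transmits around +2.2 dBm (taking it similar to the ONU Tx reading above — an assumption), a 1:32 split plus some fiber/connector loss lands right around the -24.7 dBm seen here. The split ratio and fiber loss below are illustrative, not measured:

```shell
# Rough PON link budget: Tx power - splitter loss - fiber/connector loss
awk 'BEGIN {
  tx         = 2.2    # assumed OLT Tx power, dBm (similar to the ONU Tx reading)
  split_loss = 17.5   # assumed 1:32 split at ~3.5 dB per 2-way stage
  fiber_loss = 9.4    # assumed fiber + connectors + margin, dB (illustrative)
  printf "%.1f dBm\n", tx - split_loss - fiber_loss
}'
```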



Correction to my earlier conclusion about peered routes

Earlier I mentioned:
Regular internet usage is lately Google/YT, Facebook, Netflix, Hotstar via Akamai etc and they are fine covering that part. Remaining internet (less than 20% of their traffic) going outside of NCR via TCL transit is something where performance suffers greatly.

As I used Excitel more, I have come to the conclusion that even peered routes suffer from traffic shaping and do not deliver the expected bandwidth. Here's a demo pushing a ~4.7GB Ubuntu ISO to AWS S3 in Mumbai, which goes over the Excitel-AWS peering in Delhi.


Single-stream uploads peak at ~72Mbps:
anurag@desktop ~> rclone copy -P ~/Downloads/ubuntu-22.04.3-desktop-amd64.iso s3:bb-forum-demo/
2023/12/28 04:07:40 NOTICE: S3 bucket bb-forum-demo: Switched region to "ap-south-1" from "eu-west-1"
Transferred: 4.692 GiB / 4.692 GiB, 100%, 9.135 MiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 8m44.0s
anurag@desktop ~>

I see similar speeds even when doing 20 parallel transfers.
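For context on why parallel transfers don't help here: rclone parallelises a single large object via S3 multipart upload rather than --transfers (the relevant knobs are rclone's --s3-upload-concurrency and --s3-chunk-size flags; the 64 MiB chunk size below is just an illustrative value). With 64 MiB chunks, the 4.692 GiB image splits into:

```shell
# Number of multipart chunks for a 4.692 GiB object at 64 MiB per chunk
awk 'BEGIN {
  mib   = 4.692 * 1024              # object size in MiB
  parts = int(mib / 64)
  if (parts * 64 < mib) parts++     # round up for the final partial chunk
  print parts
}'
```

So even a single rclone invocation already has plenty of streams in flight, which again points at per-subscriber shaping rather than a lack of parallelism.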



And again, to be sure this isn't some NLD congestion between Excitel's PoP at the STT / Tata Comm DC in Delhi and Rohtak, or some other issue (say on my home LAN): I see 400Mbps symmetric on both speedtest.net and fast.com.

[Screenshots: speedtest.net and fast.com results showing ~400Mbps symmetric]





This differs significantly from my primary connection via a local ISP (IAXN), a 150Mbps plan (Ookla test here). Upload performance on the 150Mbps plan:

IAXN 150Mbps plan, similar upload to AWS S3 at ~ 136Mbps

anurag@desktop ~> rclone copy -P ~/Downloads/ubuntu-22.04.3-desktop-amd64.iso s3:bb-forum-demo/iaxn/
2023/12/28 04:29:10 NOTICE: S3 bucket bb-forum-demo path iaxn: Switched region to "ap-south-1" from "eu-west-1"
Transferred: 4.692 GiB / 4.692 GiB, 100%, 17.382 MiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 4m40.2s
anurag@desktop ~>




Could AWS S3 Mumbai be the bottleneck here?
Here's a test from my dedicated server in Equinix Mumbai. The server connects to the same router that has a PNI with AWS in Mumbai, and is 0.2ms from the AWS S3 endpoint:
anurag@host01 ~> ping -c 5 52.219.156.142
PING 52.219.156.142 (52.219.156.142) 56(84) bytes of data.
64 bytes from 52.219.156.142: icmp_seq=1 ttl=244 time=0.252 ms
64 bytes from 52.219.156.142: icmp_seq=2 ttl=244 time=0.286 ms
64 bytes from 52.219.156.142: icmp_seq=3 ttl=244 time=0.276 ms
64 bytes from 52.219.156.142: icmp_seq=4 ttl=244 time=0.267 ms
64 bytes from 52.219.156.142: icmp_seq=5 ttl=244 time=0.279 ms

--- 52.219.156.142 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4104ms
rtt min/avg/max/mdev = 0.252/0.272/0.286/0.011 ms
anurag@host01 ~>
anurag@host01 ~> rclone copy -P ubuntu-22.04.3-desktop-amd64.iso s3:bb-forum-demo/s3-bom-test/
2023-12-28 04:56:51 NOTICE: S3 bucket bb-forum-demo path s3-bom-test: Switched region to "ap-south-1" from "eu-west-1"
Transferred: 4.692 GiB / 4.692 GiB, 100%, 83.107 MiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1m7.1s
anurag@host01 ~>

This confirms a max possible speed of at least 664Mbps here, way above the 400Mbps Excitel plan that I am testing.



Conclusion

Thus, sadly, while Excitel seems one of the best networks latency-wise in my area (due to the choice of their NLD), it's barely usable for anything heavy. I save 4-5ms on latency but lose out completely on throughput.

Excitel has probably priced the plan low enough for now that they make it work by treating popular speed tests (Ookla, fast.com etc.) differently from actual traffic. I've made it my backup provider for the next three months; I'll re-test a week before the three months end and will likely just drop it (unless something dramatic changes).
 