
Tech How-to: Troubleshooting Cisco’s UC Firewall Rules

John Marrett, Managed Network Service Technical Lead
March 11, 2021

Is your monitoring system reporting outages that aren't real? Are you noticing unusual network behaviour? Chances are it's due to firewall rules on some Cisco UC appliances.

In this post, I will show you how to check the firewall rules, patch the problem, and diagnose the cause.

What’s happening and why

From version 10.5 onward, appliances such as CUCM and UCCX (and possibly others) include basic firewall functionality intended to protect the system from denial-of-service attacks.

While the rules as built are well-intentioned, they are unfortunately unlikely to actually protect the system from attack.

These denial-of-service rules can easily cause monitoring systems to report false outages and produce unusual behaviour that confuses users and administrators.

To our knowledge, the firewall functionality was introduced in the following versions:

  • 12.5(1.10000.22)
  • 12.5(0.98000.33)
  • 12.0(1.21002.1)
  • 11.5(1.14900.11)
  • 11.5(1.14060.1)
  • 11.0(1.24082.1)
  • 10.5(2.16900.10)
  • 10.5(2.16900.4)
  • 10.5(2.16135.1)

Checking the rules with iptables

Cisco uses the iptables framework built into the Linux operating system underlying its appliances. You can use the command  utils firewall ipv4 list  to see the iptables rules.

Here are the rules in CUCM 12.5 that block excessive amounts of ICMP traffic:

8    ACCEPT    icmp --  0.0.0.0/0      0.0.0.0/0      limit: avg 10/sec burst 5
9    LOG       icmp --  0.0.0.0/0      0.0.0.0/0      limit: avg 1/min burst 5 LOG flags 0 level 4 prefix "ping flood "
10   DROP      icmp --  0.0.0.0/0      0.0.0.0/0

Note that different products and versions may have slightly different rules.

We can see that CUCM will accept 10 ICMP packets per second, with a small burst allowance of 5. Anything exceeding this level is dropped.
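
To make the rate limit concrete, here is a rough sketch of standalone iptables rules that behave the same way. It is illustrative only, based on the listing above; it is not the exact rule set Cisco ships and should not be applied to an appliance.

# Accept ICMP at an average of 10 packets per second, with a burst of 5
iptables -A INPUT -p icmp -m limit --limit 10/second --limit-burst 5 -j ACCEPT
# Log traffic above the limit, at most once per minute, with the "ping flood " prefix
iptables -A INPUT -p icmp -m limit --limit 1/minute --limit-burst 5 -j LOG --log-level 4 --log-prefix "ping flood "
# Silently drop any remaining ICMP traffic
iptables -A INPUT -p icmp -j DROP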

When we send some pings to a test server, we receive a response as expected:

johnf@johnf-XPS-13-9300:~$ ping -c 4 cucmv12-5.labs.stack8.com
PING cucmv12-5.labs.stack8.com (x.x.220.125) 56(84) bytes of data.
64 bytes from x.x.220.125 (x.x.220.125): icmp_seq=1 ttl=61 time=10.3 ms
64 bytes from x.x.220.125 (x.x.220.125): icmp_seq=2 ttl=61 time=6.43 ms
64 bytes from x.x.220.125 (x.x.220.125): icmp_seq=3 ttl=61 time=5.70 ms
64 bytes from x.x.220.125 (x.x.220.125): icmp_seq=4 ttl=61 time=5.13 ms
--- cucmv12-5.labs.stack8.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 5.131/6.890/10.303/2.022 ms

However, if we send large amounts of ICMP traffic to the server, we start to run into the rules that block traffic.

Here are the results of a thousand pings sent with very aggressive timeouts:

1000 packets transmitted, 154 received, 84.6% packet loss, time 14911ms
rtt min/avg/max/mdev = 3.342/5.890/24.650/2.319 ms, pipe 2, ipg/ewma 14.926/5.348 ms
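
The exact command used for this test isn't shown above; something along these lines should produce similar loss against a lab system. The hostname is the same test server as earlier, and with iputils ping an interval below 200 ms requires root.

# 1000 echo requests, 10 ms apart, waiting at most 1 second for each reply
sudo ping -c 1000 -i 0.01 -W 1 cucmv12-5.labs.stack8.com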

Despite the blocked ping traffic, other traffic is still permitted—even from the host that’s generating the blocked traffic.

Here you can see that an SSH session still reaches the server, even while it is only responding to 1 ping packet in 10.
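
You can reproduce this check by keeping a flood of pings running in one terminal while opening an SSH session in another. The admin account name below is just a placeholder; use your own OS administrator credentials.

# Terminal 1: generate enough ICMP to trip the rate limit
sudo ping -i 0.01 cucmv12-5.labs.stack8.com

# Terminal 2: the SSH management session still connects normally
ssh admin@cucmv12-5.labs.stack8.com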

Real-world example

We experienced this firewall behaviour after performing an upgrade for a client. The rules apply to all ICMP traffic, not just ping (echo request/reply).

Due to an unusual network configuration, all traffic sent by the server received back ICMP redirect responses. This had apparently worked without issue on the previous version of CUCM; however, after the upgrade, the high levels of ICMP traffic caused almost all ping traffic to this server to be dropped.

In the packet capture, we could see ICMP traffic generated in response to each and every RTP packet from a voice call's media stream.
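
If you suspect something similar, a capture filter for ICMP redirects (type 5) on any machine that can see the traffic will show them quickly; for example, with tcpdump:

# Show only ICMP redirect messages arriving on eth0
sudo tcpdump -ni eth0 'icmp[icmptype] = icmp-redirect'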

Patching the problem while you investigate

It is possible to use the  utils firewall ipv4 disable  command to turn off the firewall for up to 24 hours. This can allow the system to operate while you work to identify the source of the problematic traffic.
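
On the appliance CLI, that looks something like the following. The status command is a useful sanity check; the disable command may also accept an optional duration in minutes, but verify that against the help output on your version rather than taking my word for it.

admin:utils firewall ipv4 disable
admin:utils firewall ipv4 status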

Diagnosing the cause of the problem

You can use packet captures or logs to identify the cause of your issue. If the problem is ongoing, packet captures may be the best method, as they are easy to perform and collect. If the problem is intermittent, brief, or has self-corrected, logs will allow you to understand historical issues.
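
For the packet-capture route, the appliance CLI includes  utils network capture . The option names below are from memory, so confirm them against the built-in help before relying on them; the file name is arbitrary.

admin:utils network capture eth0 file icmp-trace count 100000 size all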

You can use the RTMT tool to collect the syslog messages files. Look for the string "ping flood" to identify the source of the traffic in these files:

Feb 26 11:04:20 cucmv12-5 kern 4 kernel:ping flood IN=eth0 OUT= MAC=00:50:56:b2:49:41:18:9c:5d:28:22:5d:08:00 SRC=x.x.40.5 DST=x.x.220.125 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=805 DF PROTO=ICMP TYPE=8 CODE=0 ID=53 SEQ=6

Feb 26 11:04:20 cucmv12-5 kern 4 kernel:ping flood IN=eth0 OUT= MAC=00:50:56:b2:49:41:18:9c:5d:28:22:5d:08:00 SRC=x.x.40.5 DST=x.x.220.125 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=807 DF PROTO=ICMP TYPE=8 CODE=0 ID=53 SEQ=7

Feb 26 11:04:20 cucmv12-5 kern 4 kernel:ping flood IN=eth0 OUT= MAC=00:50:56:b2:49:41:18:9c:5d:28:22:5d:08:00 SRC=x.x.40.5 DST=x.x.220.125 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=808 DF PROTO=ICMP TYPE=8 CODE=0 ID=53 SEQ=8

Feb 26 11:04:20 cucmv12-5 kern 4 kernel:ping flood IN=eth0 OUT= MAC=00:50:56:b2:49:41:18:9c:5d:28:22:5d:08:00 SRC=x.x.40.5 DST=x.x.220.125 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=811 DF PROTO=ICMP TYPE=8 CODE=0 ID=53 SEQ=9

Feb 26 11:04:20 cucmv12-5 kern 4 kernel:ping flood IN=eth0 OUT= MAC=00:50:56:b2:49:41:18:9c:5d:28:22:5d:08:00 SRC=x.x.40.5 DST=x.x.220.125 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=812 DF PROTO=ICMP TYPE=8 CODE=0 ID=53 SEQ=10

Feb 26 11:05:26 cucmv12-5 kern 4 kernel:ping flood IN=eth0 OUT= MAC=00:50:56:b2:49:41:18:9c:5d:28:22:5d:08:00 SRC=x.x.40.5 DST=x.x.220.125 LEN=84 TOS=0x00 PREC=0x00 TTL=61 ID=5648 DF PROTO=ICMP TYPE=8 CODE=0 ID=55 SEQ=6

Here you can see the rule is being triggered by our test system with IP x.x.40.5.
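
If you have collected several syslog files with RTMT, a quick one-liner on your workstation can summarize the offending sources. The CiscoSyslog* file name pattern is an assumption; adjust it to match whatever RTMT actually saved.

# Count "ping flood" entries per source IP across the collected files
grep -h "ping flood" CiscoSyslog* | grep -o 'SRC=[^ ]*' | sort | uniq -c | sort -rn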

Questions?

If you’re an existing client, contact the support team. If you aren’t yet a customer and could use some help, use the Let’s Talk button to chat, or call our sales team at +1 844 940 1600.

A special shoutout goes to my colleagues Dishko Hristov, who wrote the original draft, and Orlin Genchev, who helped me collect the logs you see above while I recreated the abusive traffic for this write-up.
