AN1138: Zigbee Mesh Network Performance


This application note details methods for testing Zigbee mesh network performance. With an increasing number of mesh networks available in today's wireless market, it is important for designers to understand the different use cases among these networks and their expected performance. When selecting a network or device, designers need to know the network's performance and behavior characteristics, such as battery life, network throughput and latency, and the impact of network size on scalability and reliability.

This application note demonstrates how the Zigbee mesh network differs in performance and behavior from other mesh networks. Tests were conducted using the Silicon Labs Zigbee mesh software stack and the Wireless Gecko SoC platform, which is capable of running Zigbee mesh and proprietary protocols. The test environment was a commercial office building with active Wi-Fi and Zigbee networks in range. Wireless test clusters were deployed in hallways, meeting rooms, offices, and open areas. The methodology for the benchmarking tests is defined so that others may run the same tests. These results are intended to provide guidance on design practices and principles as well as expected field performance.

KEY POINTS
- The wireless test network in the Silicon Labs Boston office is described.
- Wireless conditions and environments are evaluated.
- Mesh network performance, including throughput, latency, and large network scalability, is presented.
- Additional performance benchmarking information for other technologies is available at http://www.silabs.com/mesh-performance.

silabs.com Building a more connected world. Rev. 0.1

1. Introduction and Background

Silicon Labs provides performance testing results from embedded mesh networks as part of developer conferences and industry white papers. The basic performance data (throughput, latency, and the impact of security) can be used by system designers to define expected behavior. Testing was done across our different mesh network technologies (Zigbee, Thread, and Bluetooth mesh), and each is presented separately. This application note presents the Zigbee network performance.

1.1 Underlying Physical Layer and Packet Structure

Network performance is reported against payload size, since the application does not account for packet overhead. Zigbee uses IEEE 802.15.4-2006 with 127-byte packets and an underlying data rate of 250 kbps. The Zigbee packet format is shown below and results in a 68-byte payload. For payloads above 68 bytes, Zigbee fragments the message into multiple packets. Our performance data is based on payload size, as this is the design parameter of concern when building an application.

Figure 1.1. Zigbee Packet Format

1.2 Network Routing Differences

Zigbee was designed for home and building automation. Zigbee supports several routing techniques, including flooding of the mesh for route discovery or group messages, next-hop routing for controlled messages in the mesh, and many-to-one routing to a gateway, which then uses source routing out to devices. It is normal for a Zigbee network to use these methods at the same time.

Networks fragment larger messages into smaller ones. For Zigbee, fragmentation is done at the application layer and is end-to-end from the source to the destination. For unicast forwarding within these networks, the message is forwarded as soon as the device is ready to send. For multicast forwarding, there are generally networking requirements for how messages are forwarded. For Zigbee devices, a multicast message is forwarded by a device only after a jitter of up to 64 milliseconds.
However, the initiating device waits 500 milliseconds before retransmitting the initial message.

Note: This performance data is for the Silicon Labs implementation of these mesh networking stacks, running on the test network and infrastructure described in this document; no tests were performed with other stacks or systems.
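The fragmentation rule from section 1.1 can be sketched numerically. This is a minimal illustration of our own, not stack code; the 68-byte usable payload per packet comes from the text, and the helper name is hypothetical:

```python
import math

# Per the text: a 127-byte 802.15.4 frame leaves roughly 68 bytes of
# application payload in a Zigbee packet; larger payloads fragment.
MAX_PAYLOAD_PER_PACKET = 68  # bytes (from the packet format above)

def fragments_needed(payload_bytes: int) -> int:
    """Number of on-air packets an application payload fragments into."""
    if payload_bytes <= 0:
        return 0
    return math.ceil(payload_bytes / MAX_PAYLOAD_PER_PACKET)

# Payload sizes used later in the latency tests (50 to 300 bytes):
for size in (50, 100, 150, 200, 250, 300):
    print(f"{size:3d} bytes -> {fragments_needed(size)} packet(s)")
```

A 50-byte payload fits in a single packet, which is one reason the 50-byte results later in this note show the lowest latency.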

2. Goals and Methods

This application note defines a set of tests performed to evaluate mesh network performance, scalability, and reliability. The test conditions and infrastructure are described, in addition to the message latency and reliability. This testing is conducted on actual wireless devices in test networks, not in simulation. It is done to provide a relative comparison between the different mesh technologies to better understand and recommend their usage.

Different network and system designs have different requirements for devices and networks. As such, no one network fulfills all possible network requirements. However, the three mesh networking technologies we are comparing are all aimed at low-power and battery-operated mesh networks used for control and monitoring in homes and commercial buildings.

Normally, when analyzing data on network performance, we then consider what improvements can be made in the network. Because of the limited data available publicly on mesh network performance in large networks today, it is difficult to have industry discussions on possible improvements or changes. For example, in commercial buildings there is concern over:
- Other network traffic, since there may be many subnets that interfere with each other.
- Wi-Fi interference from the normal building Wi-Fi infrastructure, as these technologies generally operate in the 2.4 GHz ISM band.
- Network throughput and latency, as well as large network multicast latency and reliability, since multicasts are commonly used for lighting controls in dense office environments and users expect responsive lighting controls.

Note: The test results here are limited to comparisons of system performance under normal operating conditions, or under stress as noted in particular tests.
This application note does not specifically address system interference or other such effects, which have been addressed in other published results. However, testing is done in our Boston facility, where there are more than 100 Wi-Fi access points within RF range. The office also has a 300-node Zigbee lighting network that is not part of this testing but is used for normal lighting control.

2.1 Review of Other Performance Testing

There are no specific, defined methods for evaluating and reporting large network reliability, scalability, or latency. Silicon Labs has published papers in the past comparing network performance based on network testing. This testing focuses on device behavior and its impact on battery life, and on network throughput and latency. Large-scale multicast testing also requires capturing accurate timing and reliability information from large distributed networks.

All testing was conducted using the Silicon Labs Wireless Gecko SoC platform, which is capable of running Zigbee, Thread, Bluetooth mesh, and proprietary RF protocols, to eliminate the device itself as a variable in the testing. Previously published results showed differences between transceivers, network co-processors, and System-on-Chip designs. These tests all use System-on-Chip designs.

Other papers published on performance include a master's thesis on Zigbee network performance, "Performance Evaluation of Zigbee Network for Embedded Electricity Meters" by Kui Liu, published by the Department of Electrical Engineering at the Royal Institute of Technology in Sweden. For round-trip time in open air within a single hop, they tested at different ranges with the following results:
Table 2.1. Packet Loss for Distance and Average Round Trip

Distance     Average Round Trip     Packet Loss
20 meters    18.0 milliseconds      0%
50 meters    17.9 milliseconds      0%
75 meters    17.9 milliseconds      0.75%
85 meters    18.6 milliseconds      1.65%

This test used a 50-byte unicast payload at a 100-millisecond interval with security off. Note that Zigbee does not allow security to be turned off. The results show a 1-hop test consistently takes ~18 milliseconds round trip. This testing was repeated indoors under various interference conditions, but the round-trip time for 1 hop did not vary much from 18 milliseconds. Multi-hop test results were not presented.
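To put the ~18-millisecond round-trip numbers in context, the raw on-air time of a single frame at the 250 kbps rate quoted in section 1.1 can be estimated. This back-of-envelope sketch of ours ignores PHY preamble and MAC turnaround, so it is a rough lower bound, not a figure from the thesis:

```python
# Rough on-air time of one maximum-size 802.15.4 frame at 250 kbps.
FRAME_BYTES = 127          # maximum 802.15.4 frame size (section 1.1)
DATA_RATE_BPS = 250_000    # 802.15.4 2.4 GHz data rate (section 1.1)

on_air_ms = FRAME_BYTES * 8 / DATA_RATE_BPS * 1000
print(f"one full frame: {on_air_ms:.2f} ms on air")
```

Even a full frame occupies the air for only about 4 ms, so most of the ~18 ms round trip is spent on MAC acknowledgements, radio turnaround, and stack processing rather than on transmission itself.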

3. Test Network and Conditions

To minimize variability, device testing can also be performed in fixed topologies where the RF paths are wired together through splitters and attenuators to ensure the topology does not change over time and across tests. This method is used for 7-hop testing to ensure the network topology. MAC filtering can also be used to achieve the network topology. A typical wired test configuration is shown below:

Figure 3.1. Wired RF Devices in a Drawer with Splitter and Coax Cable Connectivity

Large network testing is best conducted in an open-air environment where device behavior is based on the existing and varying RF conditions. The Silicon Labs office in Boston is used for this open-air testing.

3.1 Office and Test Network Conditions

The Boston office consists of a central core with an elevator shaft and other services, with an open floor plan on the west end of the building and offices and conference rooms on the east end. The overall office measures approximately 120 feet by 200 feet. The image below shows the office layout. The darker lines represent hard walls; everything else is divided by cube partitions.

Figure 3.2. Boston Office Layout Used for Wireless Testing

The testing devices are installed at various locations around the office. These devices all have Ethernet backchannel connectivity to allow:
- Firmware updates
- Command line interface
- Scripting
- Timing analysis
- Packet capture
- Energy measurements

A typical testing cluster consists of:
- Four EM35x devices using PoE
- Six EFR32MG (Mighty Gecko) devices with multi-band support to allow testing both 2.4 GHz (PCB antenna) and proprietary sub-GHz protocols (external antenna), with USB power and Ethernet connectivity

Figure 3.3. Typical Testing Cluster

The testing clusters are spread throughout the office in both high and low locations, open areas, and enclosed meeting rooms and offices.

Figure 3.4. Testing Clusters in the Boston Office

This test network has devices added or removed on a regular basis, but at the time of this testing it consisted of the following devices:
- EM35xx devices
- EFR32™ Mighty Gecko devices

This network represents devices that were used for open-air testing by the networking and software quality assurance teams. All devices are controlled from a central test server and infrastructure, which allows scripted regression testing or manual testing by engineers.

3.2 Wireless Conditions in the Office

The Boston office has a full Zigbee lighting control system, including motion and lighting sensors and switches. This is not part of the test network and is used as a normal building control system, independent of any testing being run. The Boston office is also downtown and, in addition to our existing Wi-Fi infrastructure, there are over 100 Wi-Fi access points within RF distance of the office. The following charts were taken as a snapshot of a normal workday Wi-Fi scan. This is considered the normal Wi-Fi background traffic.

Figure 3.5. Wi-Fi Scans on a Normal Day

These Wi-Fi scans were taken at the southeast corner office, the west side, and the north side in the main conference room, respectively. These locations showed 62, 104, and 83 Wi-Fi access points within RF range.

3.3 Typical Test Network

Within the test network, a given test can be selected and run on a given set of devices. The network is established and devices are joined using the Ethernet backchannel to send commands to devices. A typical network during testing is shown below. The black and grey lines show the router connectivity and strength.

Figure 3.6. A Typical Network During Testing

4. Testing and Results

4.1 Throughput and Latency

Throughput and latency are tested in a controlled network (wired configuration) to test each hop count against different packet payloads. The normal configuration is to test up to 6 hops. Testing is done with one source node and a series of routing nodes to allow the number of hops to be varied.

This testing is done using the following configuration:
1. Zigbee application layer messages
2. Packet payloads from 50 bytes up to 300 bytes, in 50-byte increments, for the latency test
3. Security on
4. From 1 to 6 hops
5. 2 packets in flight (the 3rd is sent when the ack for the 1st is received)
6. Sending as fast as possible given ack timing
7. Measuring round-trip latency (source to destination to source) in milliseconds

For each of these different mesh networks, packet fragmentation occurs differently as payload size increases, as noted above. The use of larger packet sizes is up to the application layer, but we provide comparison data here to indicate relative performance as fragmentation occurs.
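The "2 packets in flight" send pattern described above can be modeled with a simple pipelining estimate. This is our own idealized model (fixed round-trip time, instantaneous transmission), not a measurement from the tests:

```python
import math

def transfer_time_ms(n_packets: int, rtt_ms: float, in_flight: int = 2) -> float:
    """Idealized delivery time with a fixed window of unacknowledged packets.

    With `in_flight` packets outstanding, a new send slot opens each time an
    ack returns, so the transfer completes in ceil(n / in_flight) round trips.
    """
    if n_packets <= 0:
        return 0.0
    return math.ceil(n_packets / in_flight) * rtt_ms

# Example: a 300-byte payload fragments into 5 packets (see section 1.1);
# at an assumed 18 ms round trip, keeping 2 packets in flight roughly
# halves the total time versus stop-and-wait:
print(transfer_time_ms(5, 18.0, in_flight=1))  # 90.0 ms, stop-and-wait
print(transfer_time_ms(5, 18.0, in_flight=2))  # 54.0 ms, 2 in flight
```

This is why the windowed configuration is used: it keeps the channel busy while acknowledgements are in transit.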

4.1.1 Zigbee Multi-hop Latency

[Chart: round-trip latency (milliseconds) vs. payload size (bytes), for 1 through 6 hops]
Figure 4.1. Zigbee Round Trip Latency by Packet Size

Several items are worth noting in this multi-hop latency test:
- For 1 hop with payloads out to 250 bytes, we see very low latency (60 milliseconds).
- With 50 bytes of payload (contained within 1 packet), Zigbee maintains latency below 140 milliseconds out to 6 hops.
- Latency remains below 200 milliseconds except when the payload exceeds 150 bytes and the number of hops is more than 5.

4.2 Network Tests versus Network Size

Open-air large network testing is required to validate stack performance under less controlled conditions. These networks are configured within normal Silicon Labs office space with normal Wi-Fi interference, other network operations, and building control systems. No attempt is made to isolate these networks' RF conditions.

The networks tested for each stack are:
- Small network: 24 devices
- Medium network 1: 48 devices
- Medium network 2: 96 devices
- Large network 1: 144 devices
- Large network 2: 192 devices

Note: For any of these tests, the specific number of devices is acceptable within +/- 10% of these test network targets for a given set of testing.

Testing in this large network is done in SoC mode for all devices. These networks are all configured as powered devices unless there is specific testing for sleeping end devices. For each of these networks, the testing validates reliability and latency for a set of traffic conditions. Testing is intended to be done over 100 messages, but longer runs with 10000 messages are also done for reliability. The same devices were used across the tests to keep the topology and density of the different test runs similar. The actual over-the-air conditions vary and cannot be controlled in these tests.

4.2.1 Zigbee Large Network Testing Results

Zigbee testing was done with the latest version of the Zigbee stack. Zigbee network testing was done with 5-, 25-, and 50-byte payloads and a normal broadcast transmission interval of 3 seconds. Histograms of the results are shown below.

[Histogram: percent received vs. latency (milliseconds), for 24-, 48-, 96-, 144-, and 192-node networks]
Figure 4.2. Zigbee Network Testing with 5-byte Payload

[Histogram: percent received vs. latency (milliseconds), for 24-, 48-, 96-, 144-, and 192-node networks]
Figure 4.3. Zigbee Network Testing with 25-byte Payload

[Histogram: percent received vs. latency (milliseconds), for 24-, 48-, 96-, 144-, and 192-node networks]
Figure 4.4. Zigbee Network Testing with 50-byte Payload

The following should be noted from this testing:
- As packet payload increases, latency to the devices increases. This is expected and normal behavior, as it takes more time to transmit larger packets.
- We see 100% reliability in all of these tests. Note these are 100-broadcast tests; larger tests to better measure reliability are presented below.
- As the network size grows, we see an increase and a spreading out of latency, as it takes multiple hops to deliver all the messages. Larger networks also have more contention over the air, as all devices are trying to repeat the message.

This testing was conducted with 3 seconds between broadcasts, giving the network some time to recover before sending the next message. Separate testing is shown below on what happens when the interval between broadcasts is decreased.

4.2.2 Zigbee Network Testing with Shorter Interval

It is a known issue that the use of broadcasts in 802.15.4 networks should be minimized, as they can cause temporary flooding of the network. Zigbee networks have a broadcast transaction record to prevent broadcasts that have already been forwarded by a device from being processed again. This broadcast table must contain a minimum of 9 entries, and entries are aged out after 9 seconds, meaning on average only 1 broadcast per second is supported in the network. As the above testing was done with 3 seconds between broadcasts, additional testing was done with 2-second and 1-second intervals to show the impact versus network size. The following charts show the latency for a 25-byte payload using 2-second and 1-second intervals between broadcasts.

[Histogram: percent received vs. latency (milliseconds), by network size]
Figure 4.5. Zigbee Broadcast Latency - 25-byte Payload, 2 Sec Interval

[Histogram: percent received vs. latency (milliseconds), by network size]
Figure 4.6. Zigbee Broadcast Latency - 25-byte Payload, 1 Sec Interval

From this reduction in broadcast intervals, we see the following:
- We still observe 100% reliability in these tests. These are 100-sample tests, not extended reliability tests.
- There is little difference between the original 3-second and the 2-second intervals. Both operate reasonably well across different network sizes, and the average latency in the plots centers around 70-80 milliseconds.
- When the broadcast interval drops to 1 second, considerably more of the traffic moves beyond 250 milliseconds of latency, indicating that the network is becoming congested. This supports the Zigbee design choice of sizing the broadcast table and timeout for an average of 1 broadcast per second. The average packet still gets through in a similar time, but some are delayed. The large network extended testing in section 4.2.3 shows this result in more detail.
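The broadcast-table arithmetic behind the 1-broadcast-per-second limit above can be written out explicitly. A small sketch (the 9-entry minimum and 9-second timeout are from the text; the helper function is hypothetical):

```python
# Zigbee broadcast transaction table limits, per the text above.
MIN_TABLE_ENTRIES = 9
ENTRY_TIMEOUT_S = 9.0

# Each in-flight broadcast occupies one table entry for ENTRY_TIMEOUT_S
# seconds, so the sustainable average rate is entries / timeout.
max_rate_per_s = MIN_TABLE_ENTRIES / ENTRY_TIMEOUT_S  # = 1.0 per second

def interval_is_sustainable(interval_s: float) -> bool:
    """True if broadcasting at this interval stays within the table limit."""
    return 1.0 / interval_s <= max_rate_per_s

for interval in (3.0, 2.0, 1.0, 0.5):
    print(f"{interval} s interval sustainable: {interval_is_sustainable(interval)}")
```

The 1-second interval sits exactly at the limit, which is consistent with the congestion that begins to appear in Figure 4.6.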

4.2.3 Large Network Extended Testing

Testing with the 192-node network was extended to run over 10000 multicasts to evaluate performance over a longer time period. These tests are also used to show reliability, as some failures can be expected in longer tests. For Zigbee, this testing was done with the 3-second interval between broadcasts:

[Histogram: percent received vs. latency (milliseconds), in 10-millisecond bins]
Figure 4.7. 192 Node Zigbee EM35x Network Latency and Reliability - 10000 Messages

We see a very smooth distribution in this larger data set, but 0.85% of messages take longer than 250 milliseconds to deliver. When the histogram is rebinned in 100-millisecond intervals, the distribution is as follows:

[Histogram: percent received vs. latency (milliseconds), in 100-millisecond bins; 84.44% within 100 ms and 14.66% within 200 ms, with the remainder spread out to 1000 ms and beyond]
Figure 4.8. 192 Node Zigbee EM35x Network Latency and Reliability - Modified

This testing shows 99% of packets are delivered in under 200 milliseconds, but there is a small group out at 600 and 700 milliseconds and longer. Looking at the data, 67 of the 10000 broadcasts (0.67%) did not leave the originating node for 500 milliseconds. The Zigbee standard has a passive ack timeout of 500 milliseconds, so if the originating device does not hear the broadcast repeated by a neighbor within that time, it retransmits. Over the course of the longer testing this occurred a small fraction of the time, but it can be seen in the testing data and results in a stepped increase in latency for those transmissions.

In this large test for Zigbee, 15 packets were lost overall, resulting in 99.9992% reliability for multicast delivery. This is the targeted behavior in our experience with these networks. However, 0.9% of messages were received after 200 milliseconds; if these were counted as failures, the reliability would decrease to 99.1%, which is lower than desired. Almost all of these longer-latency packets can be attributed to the 500-millisecond passive ack backoff for the initial send, noted above.
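The 99.9992% figure above can be reproduced by counting expected multicast deliveries. This is our reconstruction of the arithmetic, assuming every node except the originator counts as a receiver for each multicast:

```python
# Reliability arithmetic for the extended 192-node test above.
messages = 10_000
nodes = 192
receivers = nodes - 1        # every node except the originator
lost = 15                    # lost deliveries reported in the test

expected_deliveries = messages * receivers
reliability = 1 - lost / expected_deliveries
print(f"{reliability:.4%}")  # matches the ~99.9992% quoted above
```

Counting per-receiver deliveries rather than per-message sends is what makes 15 losses out of 10000 multicasts correspond to such a high reliability figure.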

5. Summary

Zigbee shows excellent reliability and latency below the 200-millisecond timing typically required for human interaction with devices. The Zigbee networks are well behaved up through the 192-node networks we have tested, unless the broadcast frequency is pushed too high, in which case Zigbee shows increased latency. As networks scale up, the added hops and broadcast congestion result in some increased latency. As packet payload increases, the latency across the network also increases, but the impact is small when testing 5, 25, and 50 bytes of payload. When the broadcast interval is lowered to 1 second, there is an increase in maximum latency that may be undesirable for some applications.

5.1 Follow-up Testing Considerations

The testing described in this application note requires follow-up tests to further define device behavior and network operations. The following specific items are noted for follow-up testing:
1. Failure testing can be added by dropping routers out of the network during these tests to evaluate recovery time and the impact on reliability.
2. Testing should be performed with different device types running in System-on-Chip and Network Co-Processor (NCP) modes. Previous testing has revealed some differences between these modes of operation, so this should be further characterized.

5.2 Related Literature

This application note has provided information on Zigbee mesh networking. For information on Bluetooth mesh and Thread networking, and a comparison of all three technologies, refer to the following application notes:
- AN1137: Bluetooth Mesh Network Performance
- AN1141: Thread Mesh Network Performance
- AN1142: Mesh Network Performance Comparison

Smart. Connected. Energy-Friendly.

Products: www.silabs.com/products
Quality: www.silabs.com/quality
Support and Community: community.silabs.com

Disclaimer
Silicon Labs intends to provide customers with the latest, accurate, and in-depth documentation of all peripherals and modules available for system and software implementers using or intending to use the Silicon Labs products. Characterization data, available modules and peripherals, memory sizes and memory addresses refer to each specific device, and "Typical" parameters provided can and do vary in different applications. Application examples described herein are for illustrative purposes only. Silicon Labs reserves the right to make changes without further notice and limitation to product information, specifications, and descriptions herein, and does not give warranties as to the accuracy or completeness of the included information. Silicon Labs shall have no liability for the consequences of use of the information supplied herein. This document does not imply or express copyright licenses granted hereunder to design or fabricate any integrated circuits. The products are not designed or authorized to be used within any Life Support System without the specific written consent of Silicon Labs. A "Life Support System" is any product or system intended to support or sustain life and/or health, which, if it fails, can be reasonably expected to result in significant personal injury or death. Silicon Labs products are not designed or authorized for military applications. Silicon Labs products shall under no circumstances be used in weapons of mass destruction including (but not limited to) nuclear, biological or chemical weapons, or missiles capable of delivering such weapons.
Trademark Information
Silicon Laboratories Inc., Silicon Laboratories, Silicon Labs, SiLabs and the Silicon Labs logo, Bluegiga, Bluegiga Logo, Clockbuilder, CMEMS, DSPLL, EFM, EFM32, EFR, Ember, Energy Micro, Energy Micro logo and combinations thereof, "the world's most energy friendly microcontrollers", EZLink, EZRadio, EZRadioPRO, Gecko, ISOmodem, Micrium, Precision32, ProSLIC, Simplicity Studio, SiPHY, Telegesis, the Telegesis Logo, USBXpress, Zentri and others are trademarks or registered trademarks of Silicon Labs. ARM, CORTEX, Cortex-M3 and THUMB are trademarks or registered trademarks of ARM Holdings. Keil is a registered trademark of ARM Limited. All other products or brand names mentioned herein are trademarks of their respective holders.

Silicon Laboratories Inc. | 400 West Cesar Chavez, Austin, TX 78701 USA | http://www.silabs.com