Network Working Group                                          G. Dudley
Request for Comments: 2353                                           IBM
Category: Informational                                         May 1998

                        APPN/HPR in IP Networks
            APPN Implementers' Workshop Closed Pages Document

Status of this Memo

   This memo provides information for the Internet community.  It does not specify an Internet standard of any kind.  Distribution of this memo is unlimited.

Copyright Notice

   Copyright (C) The Internet Society (1998).  All Rights Reserved.

Table of Contents

   1.0 Introduction . . . . . . . . . . . . . . . . . . . . . . .   2
   1.1 Requirements . . . . . . . . . . . . . . . . . . . . . . .   3
   2.0 IP as a Data Link Control (DLC) for HPR  . . . . . . . . .   3
   2.1 Use of UDP and IP  . . . . . . . . . . . . . . . . . . . .   4
   2.2 Node Structure . . . . . . . . . . . . . . . . . . . . . .   5
   2.3 Logical Link Control (LLC) Used for IP . . . . . . . . . .   8
     2.3.1 LDLC Liveness  . . . . . . . . . . . . . . . . . . . .   8
       2.3.1.1 Option to Reduce Liveness Traffic  . . . . . . . .   9
   2.4 IP Port Activation . . . . . . . . . . . . . . . . . . . .  10
     2.4.1 Maximum BTU Sizes for HPR/IP . . . . . . . . . . . . .  12
   2.5 IP Transmission Groups (TGs) . . . . . . . . . . . . . . .  12
     2.5.1 Regular TGs  . . . . . . . . . . . . . . . . . . . . .  12
       2.5.1.1 Limited Resources and Auto-Activation  . . . . . .  19
     2.5.2 IP Connection Networks . . . . . . . . . . . . . . . .  19
       2.5.2.1 Establishing IP Connection Networks  . . . . . . .  20
       2.5.2.2 IP Connection Network Parameters . . . . . . . . .  22
       2.5.2.3 Sharing of TGs . . . . . . . . . . . . . . . . . .  24
       2.5.2.4 Minimizing RSCV Length . . . . . . . . . . . . . .  25
     2.5.3 XID Changes  . . . . . . . . . . . . . . . . . . . . .  26
     2.5.4 Unsuccessful IP Link Activation  . . . . . . . . . . .  30
   2.6 IP Throughput Characteristics  . . . . . . . . . . . . . .  34
     2.6.1 IP Prioritization  . . . . . . . . . . . . . . . . . .  34
     2.6.2 APPN Transmission Priority and COS . . . . . . . . . .  36
     2.6.3 Default TG Characteristics . . . . . . . . . . . . . .  36
     2.6.4 SNA-Defined COS Tables . . . . . . . . . . . . . . . .  38
     2.6.5 Route Setup over HPR/IP links  . . . . . . . . . . . .  39
     2.6.6 Access Link Queueing . . . . . . . . . . . . . . . . .  39
   2.7 Port Link Activation Limits  . . . . . . . . . . . . . . .  40
   2.8 Network Management . . . . . . . . . . . . . . . . . . . .  40
   2.9 IPv4-to-IPv6 Migration . . . . . . . . . . . . . . . . . .  41
   3.0 References . . . . . . . . . . . . . . . . . . . . . . . .  42
   4.0 Security Considerations  . . . . . . . . . . . . . . . . .  43
   5.0 Author's Address . . . . . . . . . . . . . . . . . . . . .  44
   6.0 Appendix - Packet Format . . . . . . . . . . . . . . . . .  45
     6.1 HPR Use of IP Formats  . . . . . . . . . . . . . . . . .  45
       6.1.1 IP Format for LLC Commands and Responses . . . . . .  45
       6.1.2 IP Format for NLPs in UI Frames  . . . . . . . . . .  46
   7.0 Full Copyright Statement . . . . . . . . . . . . . . . . .  48

Dudley                       Informational                      [Page 1]

RFC 2353                APPN/HPR in IP Networks                 May 1998

1.0 Introduction

   The APPN Implementers' Workshop (AIW) is an industry-wide consortium of networking vendors that develops Advanced Peer-to-Peer Networking(R) (APPN(R)) standards and other standards related to Systems Network Architecture (SNA), and facilitates high quality, fully interoperable APPN and SNA internetworking products.  The AIW approved Closed Pages (CP) status for the architecture in this document on December 2, 1997, and, as a result, the architecture was added to the AIW architecture of record.  A CP-level document is sufficiently detailed that implementing products will be able to interoperate; it contains a clear and complete specification of all necessary changes to the architecture of record.  However, the AIW has procedures by which the architecture may be modified, and the AIW is open to suggestions from the internet community.

   The architecture for APPN nodes is specified in "Systems Network Architecture Advanced Peer-to-Peer Networking Architecture Reference" [1].
   A set of APPN enhancements for High Performance Routing (HPR) is specified in "Systems Network Architecture Advanced Peer-to-Peer Networking High Performance Routing Architecture Reference, Version 3.0" [2].  The formats associated with these architectures are specified in "Systems Network Architecture Formats" [3].  This memo assumes the reader is familiar with these specifications.

   This memo defines a method with which HPR nodes can use IP networks for communication, and the enhancements to APPN required by this method.  This memo also describes an option set that extends the APPN connection network model to IP networks, allowing HPR nodes to communicate over IP without having to predefine link connections.

   (R) 'Advanced Peer-to-Peer Networking' and 'APPN' are trademarks of the IBM Corporation.

1.1 Requirements

   The following are the requirements for the architecture specified in this memo:

   1. Facilitate APPN product interoperation in IP networks by documenting agreements such as the choice of the logical link control (LLC).

   2. Reduce system definition (e.g., by extending the connection network model to IP networks) -- connection network support is an optional function.

   3. Use class of service (COS) to retain existing path selection and transmission priority services in IP networks; extend the transmission priority function to include IP networks.

   4. Allow customers the flexibility to design their networks for low cost and high performance.

   5. Use HPR functions to improve both availability and scalability over existing integration techniques such as Data Link Switching (DLSw), which is specified in RFC 1795 [4] and RFC 2166 [5].
2.0 IP as a Data Link Control (DLC) for HPR

   This memo specifies the use of IP and UDP as a new DLC that can be supported by APPN nodes with the three HPR option sets: HPR (option set 1400), Rapid Transport Protocol (RTP) (option set 1401), and Control Flows over RTP (option set 1402).  Logical Data Link Control (LDLC) Support (option set 2006) is also a prerequisite.

   RTP is a connection-oriented, full-duplex protocol designed to transport data in high-speed networks.  HPR uses RTP connections to transport SNA session traffic.  RTP provides reliability (i.e., error recovery via selective retransmission), in-order delivery (i.e., a first-in-first-out [FIFO] service provided by resequencing data that arrives out of order), and adaptive rate-based (ARB) flow/congestion control.  Because RTP provides these functions on an end-to-end basis, it eliminates the need for these functions on the link level along the path of the connection.  The result is improved overall performance for HPR.  For a more complete description of RTP, see Appendix F of [2].

   This new DLC (referred to as the native IP DLC) allows customers to take advantage of APPN/HPR functions such as class of service (COS) and ARB flow/congestion control in the IP environment.  HPR links established over the native IP DLC are referred to as HPR/IP links.

   The following sections describe in detail the considerations and enhancements associated with the native IP DLC.

2.1 Use of UDP and IP

   The native IP DLC uses the User Datagram Protocol (UDP) defined in RFC 768 [6] and the Internet Protocol (IP) version 4 defined in RFC 791 [7].  Typically, access to UDP is provided by a sockets API.

   UDP provides an unreliable connectionless delivery service using IP to transport messages between nodes.  UDP has the ability to distinguish among multiple destinations within a given node, and allows port-number-based prioritization in the IP network.
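To make the DLC concrete, the sketch below exchanges a single UDP datagram the way the native IP DLC carries an LLC frame.  It is illustrative only: the loopback addresses, payload, and the SAP/port choices for this particular frame are assumptions; the registered port range 12000-12004 and the 3-byte LLC header come from the text (with the per-port priority assignments detailed in 2.6.1).

```python
import socket

# Registered HPR/IP ports (12000-12004).  The mapping of individual
# ports to LLC commands and APPN transmission priorities is given in
# 2.6.1; the names below are assumed for illustration.
PORT_LLC_COMMANDS = 12000
PORT_MEDIUM_PRIORITY = 12003

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT_MEDIUM_PRIORITY))

# An HPR/IP datagram carries a 3-byte IEEE 802.2 LLC header (DSAP,
# SSAP, control) followed by the payload; X'C8' is the default SSAP
# for NLPs (see the DEFINE_LS parameters in 2.5.1).
frame = bytes([0xC8, 0xC8, 0x03]) + b"NLP payload"

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A real implementation sets the source port to the same registered
# number so routers can prioritize on either field; here the OS picks
# an ephemeral source port for simplicity.
sender.sendto(frame, ("127.0.0.1", PORT_MEDIUM_PRIORITY))

data, _ = receiver.recvfrom(2048)
print(data[:3].hex())  # prints the LLC header: c8c803
```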
   UDP provides detection of corrupted packets, a function required by HPR.  Higher-layer protocols such as HPR are responsible for handling problems of message loss, duplication, delay, out-of-order delivery, and loss of connectivity.  UDP is adequate because HPR uses RTP to provide end-to-end error recovery and in-order delivery; in addition, LDLC detects loss of connectivity.

   The Transmission Control Protocol (TCP) was not chosen for the native IP DLC because the additional services provided by TCP, such as error recovery, are not needed.  Furthermore, the termination of TCP connections would require additional node resources (control blocks, buffers, timers, and retransmit queues) and would thereby reduce the scalability of the design.

   The UDP header has four two-byte fields.  The UDP Destination Port is a 16-bit field that contains the UDP protocol port number used to demultiplex datagrams at the destination.  The UDP Source Port is a 16-bit field that contains the UDP protocol port number that specifies the port to which replies should be sent when other information is not available; a zero setting indicates that no source port number information is being provided.  When used with the native IP DLC, this field is not used to convey a port number for replies; moreover, the zero setting is not used.  IANA has registered port numbers 12000 through 12004 for use in these two fields by the native IP DLC; use of these port numbers allows prioritization in the IP network.  For more details of the use of these fields, see 2.6.1, "IP Prioritization".

   The UDP Checksum is an optional 16-bit field that provides coverage of the UDP header and the user data; it also provides coverage of a pseudo-header that contains the source and destination IP addresses.  The UDP checksum is used to verify that the data has arrived intact at the intended receiver.
   When the UDP checksum is set to zero, it indicates that the checksum was not calculated and should not be checked by the receiver.  Use of the checksum is recommended with the native IP DLC.

   IP provides an unreliable, connectionless delivery mechanism.  The IP protocol defines the basic unit of data transfer through the IP network, and performs the routing function (i.e., choosing the path over which data will be sent).  In addition, IP characterizes how "hosts" and "gateways" should process packets, the circumstances under which error messages are generated, and the conditions under which packets are discarded.  An IP version 4 header contains an 8-bit Type of Service field that specifies how the datagram should be handled.  As defined in RFC 1349 [8], the type-of-service byte contains two defined fields.  The 3-bit precedence field allows senders to indicate the priority of each datagram.  The 4-bit type-of-service field indicates how the network should make tradeoffs between throughput, delay, reliability, and cost.  The 8-bit Protocol field specifies which higher-level protocol created the datagram; when used with the native IP DLC, this field is set to 17, which indicates that the higher-layer protocol is UDP.

2.2 Node Structure

   Figure 1 shows a possible node functional decomposition for transport of HPR traffic across an IP network.  There will be variations in different platforms based on platform characteristics.

   The native IP DLC includes a DLC manager, one LDLC component for each link, and a link demultiplexor.  Because UDP is a connectionless delivery service, there is no need for HPR to activate and deactivate lower-level connections.  The DLC manager activates and deactivates a link demultiplexor for each port and an instance of LDLC for each link established in an IP network.
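Stepping back to the UDP checksum described in 2.1: it is the standard one's-complement sum (RFC 768) over a pseudo-header (source and destination IPv4 addresses, a zero byte, protocol 17, and the UDP length) plus the UDP header and data.  A sketch of the computation; the sample addresses and the header-only segment are arbitrary:

```python
import struct

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """One's-complement checksum over the IPv4 pseudo-header plus the
    UDP segment (header and data), per RFC 768."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    data = pseudo + udp_segment
    if len(data) % 2:
        data += b"\x00"          # pad to a whole number of 16-bit words
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:           # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    checksum = ~total & 0xFFFF
    # A computed checksum of zero is transmitted as 0xFFFF, because an
    # all-zero checksum field means "checksum not calculated".
    return checksum or 0xFFFF

# Example: a header-only segment (ports 12003/12003, length 8,
# checksum field zero while computing).
src, dst = bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2])
header = struct.pack("!HHHH", 12003, 12003, 8, 0)
csum = udp_checksum(src, dst, header)
```

A receiver verifies the checksum by summing the pseudo-header and segment with the checksum field filled in; the folded result must be 0xFFFF.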
   Multiple links (e.g., one defined link and one dynamic link for connection network traffic) may be established between a pair of IP addresses.  Each link is identified by the source and destination IP addresses in the IP header and the source and destination service access point (SAP) addresses in the IEEE 802.2 LLC header (see 6.0, "Appendix - Packet Format"); the link demultiplexor passes incoming packets to the correct instance of LDLC based on these identifiers.  Moreover, the IP address pair associated with an active link and used in the IP header may not change.

   LDLC also provides other functions (for example, reliable delivery of Exchange Identification [XID] commands).  Error recovery for HPR RTP packets is provided by the protocols between the RTP endpoints.

   The network control layer (NCL) uses the automatic network routing (ANR) information in the HPR network header to pass incoming packets either to RTP or to an outgoing link.

   All components are shown as single entities, but the number of logical instances of each is as follows:

   o  DLC manager -- 1 per node
   o  LDLC -- 1 per link
   o  Link demultiplexor -- 1 per port
   o  NCL -- 1 per node (or 1 per port for efficiency)
   o  RTP -- 1 per RTP connection
   o  UDP -- 1 per port
   o  IP -- 1 per port

   Products are free to implement other structures.  Products implementing other structures will need to make the appropriate modifications to the algorithms and protocol boundaries shown in this document.
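The demultiplexing rule above can be sketched as a lookup keyed by the IP address pair and the SAP pair.  The class and method names here are illustrative, not from the architecture:

```python
class LdlcStub:
    """Stand-in for an LDLC instance; collects delivered frames."""
    def __init__(self):
        self.received = []

    def receive(self, frame):
        self.received.append(frame)

class LinkDemux:
    """One per port: routes incoming packets to LDLC instances."""
    def __init__(self):
        # Keyed by (source IP, destination IP, DSAP, SSAP) exactly as
        # the values appear in incoming packets.
        self._links = {}

    def register(self, src_ip, dst_ip, dsap, ssap, ldlc):
        self._links[(src_ip, dst_ip, dsap, ssap)] = ldlc

    def dispatch(self, src_ip, dst_ip, packet):
        # The first two bytes of the IEEE 802.2 LLC header are the
        # DSAP and SSAP addresses.
        ldlc = self._links.get((src_ip, dst_ip, packet[0], packet[1]))
        if ldlc is not None:
            ldlc.receive(packet)
        return ldlc  # None -> hand to the DLC manager (new link?)

demux = LinkDemux()
ldlc = LdlcStub()
demux.register("192.0.2.1", "192.0.2.2", 0x04, 0x04, ldlc)

# A packet from the registered partner with matching SAPs is delivered;
# a different SAP pair (e.g., a second, dynamic link) is not.
hit = demux.dispatch("192.0.2.1", "192.0.2.2", bytes([0x04, 0x04, 0xBF]))
miss = demux.dispatch("192.0.2.1", "192.0.2.2", bytes([0xC8, 0xC8, 0x03]))
```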
   [Figure 1.  HPR/IP Node Structure: an ASCII diagram showing Configuration Services and Path Control above RTP and NCL within the APPN/HPR layer; below them the IP DLC, containing the DLC manager (which creates LDLC instances) and the link demultiplexor; and, beneath the DLC, UDP over IP.]

2.3 Logical Link Control (LLC) Used for IP

   Logical Data Link Control (LDLC) is used by the native IP DLC.  LDLC is defined in [2].  LDLC uses a subset of the services defined by IEEE 802.2 LLC type 2 (LLC2); it uses only the TEST, XID, DISC, DM, and UI frames.  LDLC was defined to be used in conjunction with HPR (with the HPR Control Flows over RTP option set 1402) over reliable links that do not require link-level error recovery.  Most frame loss in IP networks (and the underlying frame networks) is due to congestion, not problems with the facilities.  When LDLC is used on a link, no link-level error recovery is available; as a result, only RTP traffic is supported by the native IP DLC.  Using LDLC eliminates the need for LLC2 and its associated cost (adapter storage, longer path length, etc.).

2.3.1 LDLC Liveness

   LDLC liveness (using the LDLC TEST command and response) is required when the underlying subnetwork does not provide notification of connection outage.  Because UDP is connectionless, it does not provide outage notification; as a result, LDLC liveness is required for HPR/IP links.
   Liveness should be sent periodically on active links, except as described in the following subsection when the option to reduce liveness traffic is implemented.  The default liveness timer period is 10 seconds.

   When the defaults for the liveness timer and retry timer (15 seconds) are used, the period between liveness tests is smaller than the time required to detect failure (retry count multiplied by retry timer period) and may be smaller than the time for liveness to complete successfully (on the order of round-trip delay).  When liveness is implemented as specified in the LDLC finite-state machine (see [2]), this is not a problem because the liveness protocol works as follows: the liveness timer is for a single link; the timer is started when the link is first activated and each time a liveness test completes successfully; when the timer expires, a liveness test is performed.  When the link is operational, the period between liveness tests is on the order of the liveness timer period plus the round-trip delay.

   For each implementation, it is necessary to check whether the liveness protocol will work in a satisfactory manner with the default settings for the liveness and retry timers.  If, for example, the liveness timer is restarted immediately upon expiration, then a different default for the liveness timer should be used.

2.3.1.1 Option to Reduce Liveness Traffic

   In some environments, it is advantageous to reduce the amount of liveness traffic when the link is otherwise idle.  (For example, this could allow underlying facilities to be temporarily deactivated when not needed.)  As an option, implementations may choose not to send liveness when the link is idle (i.e., when data was neither sent nor received over the link while the liveness timer was running).  (If the implementation is not aware of whether data has been received, liveness testing may be stopped while data is not being sent.)
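The timing consequence of the scheduling rule in 2.3.1 (the timer is restarted only when a test completes, not when it starts) can be seen in a small simulation.  This is a sketch of the rule, not LDLC's actual finite-state machine; the 10-second timer is the default from the text and the 2-second round trip is an assumption:

```python
def liveness_test_times(liveness_timer, round_trip, duration):
    """Return the times at which liveness tests are initiated."""
    times = []
    t = liveness_timer          # timer started at link activation
    while t < duration:
        times.append(t)         # TEST command sent when timer expires
        t += round_trip         # test completes after ~one round trip
        t += liveness_timer     # timer restarted on completion
    return times

# With the 10-second default and a 2-second round trip, tests start
# every 12 seconds, so consecutive tests never overlap even though the
# retry timer (15 s) exceeds the liveness timer period.
print(liveness_test_times(10, 2, 60))  # prints [10, 22, 34, 46, 58]
```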
   However, the RTP connections also have a liveness mechanism, which will generate traffic.  Some implementations of RTP allow setting a large value for the ALIVE timer, thus reducing the amount of RTP liveness traffic.

   If LDLC liveness is turned off while the link is idle, one side of the link may detect a link failure much earlier than the other.  This can cause the following problems:

   o  If a node that is aware of a link failure attempts to reactivate the link, the partner node (unaware of the link failure) may reject the activation as an unsupported parallel link between the two ports.

   o  If a node that is unaware of an earlier link failure sends data (including new session activations) on the link, the data may be discarded by a node that detected the earlier failure and deactivated the link.  As a result, session activations would fail.

   The mechanisms described below can be used to remedy these problems.  These mechanisms are needed only in a node not sending liveness when the link is idle; thus, they would not be required of a node not implementing this option that just happened to be adjacent to a node implementing the option.

   o  (Mandatory unless the node supports multiple active defined links between a pair of HPR/IP ports and supports multiple active dynamic links between a pair of HPR/IP ports.)  Anytime a node rejects the activation of an HPR/IP link as an unsupported parallel link between a pair of HPR/IP ports (sense data X'10160045' or X'10160046'), it should perform liveness on any active link between the two ports that is using a different SAP pair.  Thus, if the activation was not for a parallel link but rather was a reactivation because one of these active links had failed, the failed link will be detected.
      (If the SAP pair for the link being activated matches the SAP pair for an active link, a liveness test would succeed because the adjacent node would respond for the link being activated.)  A simple way to implement this function is for LDLC, upon receiving an activation XID, to run liveness on all active links with a matching IP address pair and a different SAP pair.

   o  (Mandatory)  Anytime a node receives an activation XID with an IP address pair and a SAP pair that match those of an active link, it should deactivate the active link and allow it to be reestablished.  A timer is required to prevent stray XIDs from deactivating an active link.

   o  (Recommended)  A node should attempt to reactivate an HPR/IP link before acting on an LDLC-detected failure.  This mechanism is helpful in preventing session activation failures in scenarios where the other side detected a link failure earlier, but the network has recovered.

2.4 IP Port Activation

   The node operator (NO) creates a native IP DLC by issuing DEFINE_DLC(RQ) (containing customer-configured parameters) and START_DLC(RQ) commands to the node operator facility (NOF).  NOF, in turn, passes DEFINE_DLC(RQ) and START_DLC(RQ) signals to configuration services (CS), and CS creates the DLC manager.  Then, the node operator can define a port by issuing DEFINE_PORT(RQ) (also containing customer-configured parameters) to NOF, with NOF passing the associated signal to CS.

   A node with adapters attached to multiple IP subnetworks may represent the multiple adapters as a single HPR/IP port.  However, in that case, the node associates a single IP address with that port.  RFC 1122 [9] requires that a node with multiple adapters be able to use the same source IP address on outgoing UDP packets regardless of the adapter used for transmission.
   *----------------------------------------------*
   |  NOF               CS                   DLC  |
   *----------------------------------------------*
    .  DEFINE_DLC(RQ)   .
  1 o----------------->o
    .  DEFINE_DLC(RSP)  |
  2 o<-----------------*
    .  START_DLC(RQ)    .       create
  3 o----------------->o------------------->o
    .  START_DLC(RSP)   |                   .
  4 o<-----------------*                    .
    .  DEFINE_PORT(RQ)  .                   .
  5 o----------------->o                    .
    .  DEFINE_PORT(RSP) |                   .
  6 o<-----------------*                    .

               Figure 2.  IP Port Activation

   The following parameters are received in DEFINE_PORT(RQ):

   o  Port name

   o  DLC name

   o  Port type (if IP connection networks are supported, set to shared access transport facility [SATF]; otherwise, set to switched)

   o  Link station role (set to negotiable)

   o  Maximum receive BTU size (default is 1461 [1492 less an allowance for the IP, UDP, and LLC headers])

   o  Maximum send BTU size (default is 1461 [1492 less an allowance for the IP, UDP, and LLC headers])

   o  Link activation limits (total, inbound, and outbound)

   o  IPv4 supported (set to yes)

   o  The local IPv4 address (required if IPv4 is supported)

   o  IPv6 supported (set to no; may be set to yes in the future; see 2.9, "IPv4-to-IPv6 Migration")

   o  The local IPv6 address (required if IPv6 is supported)

   o  Retry count for LDLC (default is 3)

   o  Retry timer period for LDLC (default is 15 seconds; a smaller value such as 10 seconds can be used for a campus network)

   o  LDLC liveness timer period (default is 10 seconds; see 2.3.1, "LDLC Liveness")

   o  IP precedence (the setting of the 3-bit field within the Type of Service byte of the IP header for the LLC commands such as XID and for each of the APPN transmission priorities; the defaults are given in 2.6.1, "IP Prioritization")

2.4.1 Maximum BTU Sizes for HPR/IP

   When IP datagrams are larger than the underlying physical links support, IP performs fragmentation.
   When HPR/IP links are established, the default maximum basic transmission unit (BTU) sizes are 1461 bytes, which corresponds to the typical IP maximum transmission unit (MTU) size of 1492 bytes supported by routers on token-ring networks.  1461 is 1492 less 20 bytes for the IP header, 8 bytes for the UDP header, and 3 bytes for the IEEE 802.2 LLC header.  The IP header is larger than 20 bytes when optional fields are included; smaller maximum BTU sizes should be configured if optional IP header fields are used in the IP network.  For IPv6, the default is reduced to 1441 bytes to allow for the typical IPv6 header size of 40 bytes.

   Smaller maximum BTU sizes (but not less than 768) should be used when necessary to avoid fragmentation.  Larger BTU sizes should be used to improve performance when the customer's IP network supports a sufficiently large IP MTU size.  The maximum receive and send BTU sizes are passed to CS in DEFINE_PORT(RQ).  These maximum BTU sizes can be overridden in DEFINE_CN_TG(RQ) or DEFINE_LS(RQ).

   The Flags field in the IP header should be set to allow fragmentation.  Some products will not be able to control the setting of the bit allowing fragmentation; in that case, fragmentation will most likely be allowed.  Although fragmentation is slow and prevents prioritization based on UDP port numbers, it does allow connectivity across paths with small MTU sizes.

2.5 IP Transmission Groups (TGs)

2.5.1 Regular TGs

   Regular HPR TGs may be established in IP networks using the native IP DLC architecture.  Each of these TGs is composed of one or more HPR/IP links.  Configuration services (CS) identifies the TG with the destination control point (CP) name and TG number; the destination CP name may be configured or learned via XID, and the TG number, which may be configured, is negotiated via XID.  For auto-activatable links, the destination CP name and TG number must be configured.
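The BTU defaults in 2.4.1 are just header arithmetic: the IP MTU less the IP, UDP, and IEEE 802.2 LLC headers.  A sketch:

```python
# Header sizes from 2.4.1 (IPv4 header without optional fields).
IPV4_HEADER = 20
IPV6_HEADER = 40
UDP_HEADER = 8
LLC_HEADER = 3  # DSAP, SSAP, control

def max_btu(ip_mtu, ip_version=4):
    """Largest BTU that fits in one datagram of the given IP MTU."""
    ip_hdr = IPV4_HEADER if ip_version == 4 else IPV6_HEADER
    return ip_mtu - ip_hdr - UDP_HEADER - LLC_HEADER

print(max_btu(1492))                # prints 1461, the IPv4 default
print(max_btu(1492, ip_version=6))  # prints 1441, the IPv6 default
```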
   When multiple links (dynamic or defined) are established between a pair of IP ports (each associated with a single IP address), an incoming packet can be mapped to its associated link using the IP address pair and the service access point (SAP) address pair.  If a node receives an activation XID for a defined link with an IP address pair and a SAP pair that are the same as for an active defined link, that node can assume that the link has failed and that the partner node is reactivating the link.  In such a case, as an optimization, the node receiving the XID can take down the active link and allow the link to be reestablished in the IP network.  Because UDP packets can arrive out of order, implementation of this optimization requires the use of a timer to prevent a stray XID from deactivating an active link.

   Support for multiple defined links between a pair of HPR/IP ports is optional.  There is currently no value in defining multiple HPR/IP links between a pair of ports.  In the future, if HPR/IP support for the Resource ReSerVation Protocol (RSVP) [10] is defined, it may be advantageous to define such parallel links to segregate traffic by COS on RSVP "sessions."  Using RSVP, HPR would be able to reserve bandwidth in IP networks.  An HPR logical link would be mapped to an RSVP "session" that would likely be identified by either a specific application-provided UDP port number or a dynamically assigned UDP port number.

   When multiple defined HPR/IP links between ports are not supported, an incoming activation for a defined HPR/IP link may be rejected with sense data X'10160045' if an active defined HPR/IP link already exists between the ports.  If the SAP pair in the activation XID matches the SAP pair for the existing link, the optimization described above may be used instead.
   If parallel defined HPR/IP links between ports are not supported, an incoming activation XID is mapped to the defined link station (if it exists) associated with the port on the adjacent node using the source IP address in the incoming activation XID.  This source IP address should be the same as the destination IP address associated with the matching defined link station.  (They may not be the same if the adjacent node has multiple IP addresses and the configuration was not coordinated correctly.)

   If parallel HPR/IP links between ports are supported, multiple defined link stations may be associated with the port on the adjacent node.  In that case, predefined TG numbers (see "Partitioning the TG Number Space" in Chapter 9, Configuration Services, of [1]) may be used to map the XID to a specific link station.  However, because the same TG characteristics may be used for all HPR/IP links between a given pair of ports, all the link stations associated with the port in the adjacent node should be equivalent; as a result, TG number negotiation using negotiable TG numbers may be used.

   In the future, if multiple HPR/IP links with different characteristics are defined between a pair of ports using RSVP, defined link stations will need sufficient configured information to be matched with incoming XIDs.  (Correct matching of an incoming XID to a defined link station allows CS to provide the correct TG characteristics to topology and routing services (TRS).)  At that time, CS will do the mapping based on both the IP address of the adjacent node and a predefined TG number.

   The node initiating link activation knows which link it is activating.  Some parameters sent in the prenegotiation XID are defined in the regular link station configuration and are not allowed to change in subsequent negotiation-proceeding XIDs.
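The matching rules above (source IP address alone when parallel links are not supported; a predefined TG number to disambiguate when they are) can be sketched as follows.  The data structures and names are illustrative, not from the architecture:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DefinedLinkStation:
    dest_ip: str
    tg_number: Optional[int] = None  # predefined TG number, if any

def match_link_station(stations: List[DefinedLinkStation],
                       src_ip: str,
                       tg_number: Optional[int] = None):
    """Match an incoming activation XID to a defined link station."""
    candidates = [ls for ls in stations if ls.dest_ip == src_ip]
    if len(candidates) == 1:
        return candidates[0]           # no parallel links: IP suffices
    for ls in candidates:              # parallel links: disambiguate
        if ls.tg_number == tg_number:  # with the predefined TG number
            return ls
    return None  # no match: fall back to the port's default parameters

stations = [DefinedLinkStation("192.0.2.9", tg_number=21),
            DefinedLinkStation("192.0.2.9", tg_number=22),
            DefinedLinkStation("198.51.100.7")]

assert match_link_station(stations, "198.51.100.7") is stations[2]
assert match_link_station(stations, "192.0.2.9", tg_number=22) is stations[1]
assert match_link_station(stations, "203.0.113.1") is None
```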
   To allow for forward migration to RSVP, when a regular TG is activated in an IP network, the node receiving the first XID (i.e., the node not initiating link activation) must also understand which defined link station is being activated before sending a prenegotiation XID, in order to correctly set parameters that cannot change.  For this reason, the node initiating link activation will indicate the TG number in prenegotiation XIDs by including a TG Descriptor (X'46') control vector containing a TG Identifier (X'80') subfield.  Furthermore, the node receiving the first XID will force the node activating the link to send the first prenegotiation XID by responding to null XIDs with null XIDs.  To prevent potential deadlocks, the node receiving the first XID has a limit (the LDLC retry count can be used) on the number of null XIDs it will send.  Once this limit is reached, that node will send an XID with an XID Negotiation Error (X'22') control vector in response to a null XID; sense data X'0809003A' is included in the control vector to indicate an unexpected null XID.  If the node that received the first XID receives a prenegotiation XID without the TG Identifier subfield, it will send an XID with an XID Negotiation Error control vector to reject the link connection; sense data X'088C4680' is included in the control vector to indicate the subfield was missing.

   For a regular TG, the TG parameters are provided by the node operator based on customer configuration in DEFINE_PORT(RQ) and DEFINE_LS(RQ).  The following parameters are supplied in DEFINE_LS(RQ) for HPR/IP links:

   o  The destination IP host name (this parameter can usually be mapped to the destination IP address):  If the link is not activated at node initialization, the IP host name should be mapped to an IP address, and the IP address should be stored with the link station definition.
      This is required to allow an incoming link activation to be matched with the link station definition.  If the adjacent node activates the link with a different IP address (e.g., it could have multiple ports), it will not be possible to match the link activation with the link station definition, and the default parameters specified in the local port definition will be used.

   o  The destination IP version (set to version 4; support for version 6 may be required in the future; this parameter is only required if the address and version cannot be determined using the destination IP host name)

   o  The destination IP address (in the format specified by the destination IP version; this parameter is only required if the address cannot be determined using the destination IP host name)

   o  Source service access point address (SSAP) used for XID, TEST, DISC, and DM (default is X'04'; other values may be specified when multiple links between a pair of IP addresses are defined)

   o  Destination service access point address (DSAP) used for XID, TEST, DISC, and DM (default is X'04')

   o  Source service access point address (SSAP) used for HPR network layer packets (NLPs) (default is X'C8'; other values may be specified when multiple links between a pair of IP addresses are defined)

   o  Maximum receive BTU size (default is 1461; this parameter is used to override the setting in DEFINE_PORT)

   o  Maximum send BTU size (default is 1461; this parameter is used to override the setting in DEFINE_PORT)
   o  IP precedence (the setting of the 3-bit field within the Type of
      Service byte of the IP header for LLC commands such as XID and
      for each of the APPN transmission priorities; the defaults are
      given in 2.6.1, "IP Prioritization" on page 28; this parameter
      is used to override the settings in DEFINE_PORT)

   o  Shareable with connection network traffic (default is yes for
      non-RSVP links)

Dudley                       Informational                     [Page 15]

RFC 2353                 APPN/HPR in IP Networks                May 1998

   o  Retry count for LDLC (default is 3; this parameter is used to
      override the setting in DEFINE_PORT)

   o  Retry timer period for LDLC (default is 15 seconds; a smaller
      value such as 10 seconds can be used for a campus link; this
      parameter is used to override the setting in DEFINE_PORT)

   o  LDLC liveness timer period (default is 10 seconds; this
      parameter is used to override the setting in DEFINE_PORT; see
      2.3.1, "LDLC Liveness" on page 7)

   o  Auto-activation supported (default is no; may be set to yes when
      the local node has switched access to the IP network)

   o  Limited resource (default is set in concert with auto-activation
      supported)

   o  Limited resource liveness timer (default is 45 sec.)

   o  Port name

   o  Adjacent CP name (optional)

   o  Local CP-CP sessions supported

   o  Defined TG number (optional)

   o  TG characteristics

   The following figures show the activation and deactivation of
   regular TGs.

   [Figure 3. Regular TG Activation (outgoing): sequence diagram of
   CONNECT_OUT and XID flows among CS, the DLC manager, LDLC, the
   link demultiplexor (DMUX), and UDP.]

   In Figure 3, upon receiving START_LS(RQ) from NOF, CS starts the
   link activation process by sending CONNECT_OUT(RQ) to the DLC
   manager.  The DLC manager creates an instance of LDLC for the
   link, informs the link demultiplexor, and sends CONNECT_OUT(+RSP)
   to CS.  Then, CS starts the activation XID exchange.

   [Figure 4. Regular TG Activation (incoming): sequence diagram of
   CONNECT_IN and XID flows among CS, the DLC manager, LDLC, the link
   demultiplexor (DMUX), and UDP.]

   In Figure 4, when an XID is received for a new link, it is passed
   to the DLC manager.  The DLC manager sends CONNECT_IN(RQ) to
   notify CS of the incoming link activation, and CS sends
   CONNECT_IN(+RSP) accepting the link activation.  The DLC manager
   then creates a new instance of LDLC, informs the link
   demultiplexor, and forwards the XID to CS via LDLC.  CS then
   responds by sending an XID to the adjacent node.

   The two following figures show normal TG deactivation (outgoing
   and incoming).

   [Figure 5. Regular TG Deactivation (outgoing): sequence diagram of
   DEACT, DM, DISC, and DISCONNECT flows among CS, the DLC manager,
   LDLC, the link demultiplexor (DMUX), and UDP.]

   In Figure 5, upon receiving STOP_LS(RQ) from NOF, CS sends DEACT
   to notify the partner node that the HPR link is being deactivated.
   When the response is received, CS sends DISCONNECT(RQ) to the DLC
   manager, and the DLC manager deactivates the instance of LDLC.
   Upon receiving DISCONNECT(RSP), CS sends STOP_LS(RSP) to NOF.

   [Figure 6. Regular TG Deactivation (incoming): sequence diagram of
   DEACT, DM, DISC, and DISCONNECT flows among CS, the DLC manager,
   LDLC, the link demultiplexor (DMUX), and UDP.]

   In Figure 6, when an adjacent node deactivates a TG, the local
   node receives a DISC.  CS sends STOP_LS(IND) to NOF.  Because IP
   is connectionless, the DLC manager is not aware that the link has
   been deactivated.  For that reason, CS also needs to send
   DISCONNECT(RQ) to the DLC manager; the DLC manager deactivates the
   instance of LDLC.

2.5.1.1 Limited Resources and Auto-Activation

   To reduce tariff charges, the APPN architecture supports the
   definition of switched links as limited resources.  A limited-
   resource link is deactivated when there are no sessions traversing
   the link.  Intermediate HPR nodes are not aware of sessions
   between logical units (referred to as LU-LU sessions) carried in
   RTP connections crossing those nodes; in HPR nodes, limited-
   resource TGs are deactivated when no traffic is detected for some
   period of time.  Furthermore, APPN links may be defined as
   auto-activatable.  Auto-activatable links are activated when a new
   session has been routed across the link.
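   The limited-resource behavior described above can be sketched as a
   simple idle timer: the link comes up when a session is routed
   across it and is torn down when no traffic is detected for the
   liveness period.  The sketch below is illustrative only; the class
   and method names are ours, not data structures defined by this
   document.

```python
import time

class LimitedResourceLink:
    """Illustrative model of a limited-resource, auto-activatable
    HPR link: deactivated after liveness_period seconds of silence."""

    def __init__(self, liveness_period=45.0, auto_activatable=True):
        self.liveness_period = liveness_period  # default 45 sec. per the text
        self.auto_activatable = auto_activatable
        self.active = False
        self.last_traffic = None

    def on_session_routed(self):
        # Auto-activatable links are activated when a new session
        # has been routed across the link.
        if self.auto_activatable and not self.active:
            self.active = True
            self.last_traffic = time.monotonic()

    def on_traffic(self):
        # Any NLP seen on the link resets the idle clock.
        self.last_traffic = time.monotonic()

    def check_idle(self, now=None):
        # Intermediate HPR nodes cannot see the LU-LU sessions inside
        # RTP connections, so deactivation is driven purely by the
        # absence of traffic for the liveness period.
        now = time.monotonic() if now is None else now
        if self.active and now - self.last_traffic >= self.liveness_period:
            self.active = False  # deactivate the limited-resource TG
        return self.active
```

   A real implementation would run the idle check periodically from a
   timer rather than on demand.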
   An HPR node may have access to an IP network via a switched access
   link.  In such environments, it may be advisable for customers to
   define regular HPR/IP links as limited resources and as being
   auto-activatable.

2.5.2 IP Connection Networks

   Connection network support for IP networks (option set 2010) is
   described in this section.

   APPN architecture defines single-link TGs across the point-to-
   point lines connecting APPN nodes.  The natural extension of this
   model would be to define a TG between each pair of nodes connected
   to a shared access transport facility (SATF) such as a LAN or IP
   network.  However, the high cost of the system definition of such
   a mesh of TGs is prohibitive for a network of more than a few
   nodes.  For that reason, the APPN connection network model was
   devised to reduce the system definition required to establish TGs
   between APPN nodes.

   Other TGs may be defined through the SATF which are not part of
   the connection network.  Such TGs (referred to as regular TGs in
   this document) are required for sessions between control points
   (referred to as CP-CP sessions) but may also be used for LU-LU
   sessions.

   In the connection network model, a virtual routing node (VRN) is
   defined to represent the SATF.  Each node attached to the SATF
   defines a single TG to the VRN rather than TGs to all other
   attached nodes.  Topology and routing services (TRS) specifies
   that a session is to be routed between two nodes across a
   connection network by including the connection network TGs between
   each of those nodes and the VRN in the Route Selection control
   vector (RSCV).  When a network node has a TG to a VRN, the network
   topology information associated with that TG includes DLC
   signaling information required to establish connectivity to that
   node across the SATF.  For an end node, the DLC signaling
   information is returned as part of the normal directory services
   (DS) process.
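   The definition savings behind the connection network model can be
   made concrete with a little arithmetic (a sketch; the function
   names below are ours, not the RFC's): a full mesh of defined TGs
   grows quadratically with the number of attached nodes, while one
   TG per node to the VRN grows only linearly.

```python
def full_mesh_definitions(n):
    """TG definitions for a full mesh over an SATF: each of the
    n*(n-1)/2 node pairs needs a link-station definition at both
    ends, i.e. n*(n-1) definitions in total."""
    return n * (n - 1)

def connection_network_definitions(n):
    """With a VRN representing the SATF, each attached node defines
    a single TG to the VRN: n definitions in total."""
    return n

# Definition cost for a 50-node IP network:
print(full_mesh_definitions(50))           # 2450
print(connection_network_definitions(50))  # 50
```

   This is why a mesh of defined TGs is described above as prohibitive
   for more than a few nodes.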
   TRS includes the DLC signaling information for TGs across
   connection networks in RSCVs.  CS creates a dynamic link station
   when the next hop in the RSCV of an ACTIVATE_ROUTE signal received
   from session services (SS) is a connection network TG or when an
   adjacent node initiates link activation upon receiving such an
   ACTIVATE_ROUTE signal.  Dynamic link stations are normally treated
   as limited resources, which means they are deactivated when no
   sessions are using them.  CP-CP sessions are not supported on
   connections using dynamic link stations because CP-CP sessions
   normally need to be kept up continuously.

   Establishment of a link across a connection network normally
   requires the use of CP-CP sessions to determine the destination IP
   address.  Because CP-CP sessions must flow across regular TGs, the
   definition of a connection network does not eliminate the need to
   define regular TGs as well.

   Normally, one connection network is defined on a LAN (i.e., one
   VRN is defined).  For an environment with several interconnected
   campus IP networks, a single wide-area connection network can be
   defined; in addition, separate connection networks can be defined
   between the nodes connected to each campus IP network.

2.5.2.1 Establishing IP Connection Networks

   Once the port is defined, a connection network can be defined on
   the port.  In order to support multiple TGs from a port to a VRN,
   the connection network is defined by the following process:

   1. A connection network and its associated VRN are defined on the
      port.  This is accomplished by the node operator issuing a
      DEFINE_CONNECTION_NETWORK(RQ) command to NOF and NOF passing a
      DEFINE_CN(RQ) signal to CS.

   2. Each TG from the port to the VRN is defined by the node
      operator issuing DEFINE_CONNECTION_NETWORK_TG(RQ) to NOF and
      NOF passing DEFINE_CN_TG(RQ) to CS.  Prior to implementation of
      Resource ReSerVation Protocol (RSVP) support, only one
      connection network TG between a port and a VRN is required.
In that case, product support for the DEFINE_CN_T