In Solaris 10, IPMP works with a single IP address, unlike earlier releases where three IP addresses were required.
I actually tested this on VMware and got quite satisfactory results.
The pcn interface is officially listed as not supporting IPMP, but it works fine anyway.
The network interfaces Sun officially supports for Link-Based Failure Detection are:
-> hme, eri, ce, ge, bge, qfe, dmfe
It is best to avoid NICs that do not support Link-Based Failure Detection.
One drawback, though, is that failover fires the moment the link drops. With plain network redundancy that is not a problem, but on a server running Cluster software it gets more serious: the service can be unavailable for as long as the failover takes (DB, AP).
I need to study this more, but I expect that digging further into the DLPI IFF_RUNNING flag will turn up something useful.
There is probably a knob to adjust the failover time, as with the older probe-based method.
I will update this post once I find it.
Everything below was taken from a Daum cafe.
===========================================================================
* Source: http://cafe.daum.net/osschool (instructor Jo Jae-gu's cafe; post by Yaongs)
In Solaris 9 and earlier, applying IPMP required at least three IP addresses for failover to work.
Starting with Solaris 10, a data-link-layer check called link-based Failure Detection supports failover and failback with just one IP address.
(The older method, probe-based Failure Detection, monitors a higher layer by sending ICMP probes.)
The content below is excerpted from Sunsolve.
Most of this information is already documented in the IP Services/IPMP part of the Solaris System Administration Guide (816-4554). This document is a short summary of the failure detection types, with additional typical/recommended configuration examples that use link-based failure detection only. Even though link-based failure detection was supported before Solaris 10 (as long as DLPI link up/down notifications are supported by the network driver in use), it is now possible to use this failure detection type without any probing (probe-based failure detection).
Contents:
1. Types of Failure Detection
1.1. Link-based Failure Detection
1.2. Probe-based Failure Detection
2. Configuration Examples using Link-based Failure Detection only
2.1. Single Interface
2.2. Multiple Interfaces
2.2.1. Active-Active
2.2.1.1. Two Interfaces
2.2.1.2. Two Interfaces + logical
2.2.1.3. Three Interfaces
2.2.2. Active-Standby
2.2.2.1. Two Interfaces
2.2.2.2. Two Interfaces + logical
3. References
1. Types of Failure Detection
1.1. Link-based Failure Detection
Link-based failure detection is always enabled (provided it is supported by the interface), whether or not optional probe-based failure detection is used. As per PSARC/1999/225, network drivers send the asynchronous DLPI notifications DL_NOTE_LINK_DOWN (link/NIC is down) and DL_NOTE_LINK_UP (link/NIC is up). The UP and DOWN notifications are used in IP to set and clear the IFF_RUNNING flag, which, in the absence of such notifications, is always set for an interface that is up. Failure detection software immediately detects changes to IFF_RUNNING. These DLPI notifications were added to network drivers over time, and almost all of them support them as of Solaris 10.
With link-based failure detection, only the link between the local interface and its link partner is checked, at the hardware layer. Neither the IP layer nor any further network path is monitored!
No test addresses are required for link-based failure detection.
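A simple way to watch link-based detection work (a rough sketch; the interface name follows the examples below, and exact flags and messages vary by system): unplug the cable on a group member and check the interface flags, where IFF_RUNNING shows up as RUNNING in ifconfig output.
# ifconfig ce0                               <- healthy: flags include RUNNING
(unplug the cable on ce0)
# ifconfig ce0                               <- failed: RUNNING cleared, FAILED set
# grep mpathd /var/adm/messages | tail -2    <- in.mpathd logs the link-down and failover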
For more information, please refer to the Solaris 10 System Administration Guide:
IP Services >> IPMP >> 30. Introducing IPMP (Overview) >> Link-Based Failure Detection
1.2. Probe-based Failure Detection
Probe-based failure detection is performed on each interface in the IPMP group that has a test address. Using this test address, ICMP probe messages go out over this interface to one or more target systems on the same IP link. The in.mpathd daemon determines which target systems to probe dynamically:
- All default routes on the same IP link are used as probe targets.
- All host routes on the same IP link are used as probe targets (see "Configuring Target Systems"; an example follows the note below).
- If neither default nor host routes are available, in.mpathd sends an all-hosts multicast to 224.0.0.1 (IPv4) or ff02::1 (IPv6) to find neighbor hosts on the link.
Note: Available probe targets are determined dynamically, so the in.mpathd daemon does not need to be restarted.
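For example, to make in.mpathd probe a specific target instead of relying on the default router, the admin guide adds static host routes; the target address below is illustrative and should be a reachable host on the same IP link:
# route add -host 192.168.10.1 192.168.10.1 -static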
The in.mpathd daemon probes all the targets separately through all the interfaces in the IPMP group. The probing rate depends on the failure detection time (FDT) specified in /etc/default/mpathd (default 10 seconds), with 5 probes in each timeframe. If 5 consecutive probes fail, in.mpathd considers the interface to have failed. The minimum repair detection time is twice the failure detection time, 20 seconds by default, because replies to 10 consecutive probes must be received.
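The relevant tunables live in /etc/default/mpathd; the stock Solaris 10 file contains the following settings (comments omitted; FAILURE_DETECTION_TIME is in milliseconds). After editing, signal the daemon with pkill -HUP in.mpathd so it re-reads the file.
/etc/default/mpathd
FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes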
Without any configured host routes, the default route is used as the single probe target in most cases. In this case the whole network path up to the gateway (router) is monitored at the IP layer. With all interfaces in the IPMP group connected via redundant network paths (switches etc.), you get full redundancy. On the other hand, the default router can be a single point of failure, resulting in 'All Interfaces in group have failed'. Even with the default gateway down, it can make sense not to fail the whole IPMP group, and to allow traffic within the local network. In this case specific probe targets (hosts or active network components) can be configured via host routes. So it is a question of network design which network path you want to monitor.
A test address is required on each interface in the IPMP group, but the test addresses can be in a different IP test subnet than the data address(es). So private network addresses as specified by RFC 1918 (e.g. 10/8, 172.16/12, or 192.168/16) can be used as well.
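For comparison, one common arrangement for a probe-based setup adds a non-failover test address as a logical interface (the 10.0.0.1 test address is illustrative; 'deprecated' keeps applications from sourcing traffic from the test address, and '-failover' pins it to the interface):
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up \
addif 10.0.0.1 deprecated -failover netmask + broadcast + up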
For more information, please refer to the Solaris 10 System Administration Guide:
IP Services >> IPMP >> 30. Introducing IPMP (Overview) >> Probe-Based Failure Detection
2. Configuration Examples using Link-based Failure Detection only
An IPMP configuration typically consists of two or more physical interfaces on the same system that are attached to the same IP link. These physical interfaces might or might not be on the same NIC. The interfaces are configured as members of the same IPMP group.
A single interface can be configured in its own IPMP group. The single-interface IPMP group has the same behavior as an IPMP group with multiple interfaces. However, failover and failback cannot occur for an IPMP group with only one interface.
The following message tells you that this is a link-based-failure-detection-only configuration. It is reported for each interface in the group.
/var/adm/messages
in.mpathd[144]: [ID 975029 daemon.error] No test address configured on interface ce0; disabling probe-based failure detection on it
So in this configuration it is not an error, but rather a confirmation that probe-based failure detection has been disabled correctly.
2.1. Single Interface
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up
# ifconfig -a
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
2.2. Multiple Interfaces
2.2.1. Active-Active
2.2.1.1. Two Interfaces
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up
/etc/hostname.ce1
group ipmp0 up
# ifconfig -a
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:3:ba:93:91:35
2.2.1.2. Two Interfaces + logical
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up \
addif 192.168.10.11 netmask + broadcast + up
/etc/hostname.ce1
group ipmp0 up
# ifconfig -a
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.11 netmask ffffff00 broadcast 192.168.10.255
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:3:ba:93:91:35
2.2.1.3. Three Interfaces
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up
/etc/hostname.ce1
group ipmp0 up
/etc/hostname.bge1
group ipmp0 up
# ifconfig -a
bge1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:9:3d:11:91:1b
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname ipmp0
ether 0:3:ba:93:91:35
2.2.2. Active-Standby
2.2.2.1. Two Interfaces
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up
/etc/hostname.ce1
group ipmp0 standby up
# ifconfig -a
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
ce1: flags=69000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 0 index 5
inet 0.0.0.0 netmask 0
groupname ipmp0
ether 0:3:ba:93:91:35
2.2.2.2. Two Interfaces + logical
/etc/hostname.ce0
192.168.10.10 netmask + broadcast + group ipmp0 up \
addif 192.168.10.11 netmask + broadcast + up
/etc/hostname.ce1
group ipmp0 standby up
# ifconfig -a
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.10 netmask ffffff00 broadcast 192.168.10.255
groupname ipmp0
ether 0:3:ba:93:90:fc
ce0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.10.11 netmask ffffff00 broadcast 192.168.10.255
ce0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
ce1: flags=69000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 0 index 5
inet 0.0.0.0 netmask 0
groupname ipmp0
ether 0:3:ba:93:91:35
* Test procedure
# if_mpadm -d ce0
-> takes the ce0 interface offline
-> fails over to ce1
# if_mpadm -r ce0
-> brings the ce0 interface back online
-> fails back from ce1 to ce0
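A rough sketch of a full test run for the two-interface examples above (exact output varies; the point is that the data address 192.168.10.10 should move to a logical interface on ce1 and back):
# ifconfig -a                            <- before: 192.168.10.10 sits on ce0
# if_mpadm -d ce0                        <- detach ce0 from the group (simulated failure)
# ifconfig -a                            <- after: a ce1:1 logical interface carries 192.168.10.10
# grep mpathd /var/adm/messages | tail   <- in.mpathd logs the offline/failover events
# if_mpadm -r ce0                        <- reattach ce0; the data address fails back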