Looking in indexes: https://mirrors.aliyun.com/pypi/simple/
Collecting paramiko>=3.5.1
  Using cached https://mirrors.aliyun.com/pypi/packages/15/f8/c7bd0ef12954a81a1d3cea60a13946bd9a49a0036a5927770c461eade7ae/paramiko-3.5.1-py3-none-any.whl (227 kB)
Requirement already satisfied: bcrypt>=3.2 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (4.0.1)
Requirement already satisfied: cryptography>=3.3 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (3.3.2)
Requirement already satisfied: pynacl>=1.5 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (1.5.0)
Requirement already satisfied: six>=1.4.1 in ./env-python3/lib/python3.8/site-packages (from cryptography>=3.3->paramiko>=3.5.1) (1.16.0)
Requirement already satisfied: cffi>=1.12 in ./env-python3/lib/python3.8/site-packages (from cryptography>=3.3->paramiko>=3.5.1) (1.15.1)
Requirement already satisfied: pycparser in ./env-python3/lib/python3.8/site-packages (from cffi>=1.12->cryptography>=3.3->paramiko>=3.5.1) (2.21)
Installing collected packages: paramiko
  Attempting uninstall: paramiko
    Found existing installation: paramiko 2.7.1
    Uninstalling paramiko-2.7.1:
      Successfully uninstalled paramiko-2.7.1
Successfully installed paramiko-3.5.1
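The pip output above ends with a `Successfully installed <name>-<version>` line. As a hedged aside, a small helper like the following (hypothetical, not part of sonic-mgmt, and handling only the single-package form of that line) can pull the package name and version out for logging or assertions:

```python
import re

def parse_pip_installed(line):
    """Parse a pip 'Successfully installed <name>-<version>' line.

    Returns a (name, version) tuple, or None if the line does not match.
    Only handles the single-package form; pip may list several packages.
    """
    m = re.match(r"Successfully installed (\S+)-(\d[\w.]*)$", line.strip())
    if not m:
        return None
    return m.group(1), m.group(2)

# Against the final line of the pip output above:
print(parse_pip_installed("Successfully installed paramiko-3.5.1"))  # ('paramiko', '3.5.1')
```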
=== Running tests in groups ===
Running: python3 -m pytest srv6/test_srv6_basic_sanity.py --inventory ../ansible/veos_vtb --host-pattern vlab-c-01 --testbed vms-kvm-ciscovs-7nodes --testbed_file vtestbed.yaml --log-cli-level warning --log-file-level debug --kube_master unset --showlocals --assert plain --show-capture no -rav --allow_recover --ignore=ptftests --ignore=acstests --ignore=saitests --ignore=scripts --ignore=k8s --ignore=sai_qualify --junit-xml=logs/tr.xml --log-file=logs/test.log --skip_sanity --disable_loganalyzer --neighbor_type=sonic
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
ansible: 2.9.27
rootdir: /data/sonic-mgmt/tests, configfile: pytest.ini
plugins: forked-1.6.0, allure-pytest-2.8.22, xdist-1.28.0, html-3.2.0, ansible-2.2.4, repeat-0.9.1, metadata-2.0.4, celery-4.4.7

----------------------------- live log collection ------------------------------
08:44:12 __init__.load_minigraph_facts            L0245 ERROR  | Failed to load minigraph basic facts, exception: CalledProcessError(2, ['ansible', '-m', 'minigraph_facts', '-i', '../ansible/veos_vtb', 'vlab-c-01', '-a', 'host=vlab-c-01'])
collected 9 items
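The collection-time error above is a `subprocess.CalledProcessError(2, [...])` raised when the `ansible -m minigraph_facts …` invocation exits non-zero. As a minimal, self-contained illustration (the failing command below is a stand-in, not the real ansible call), this is how `check=True` surfaces such an exit code:

```python
import subprocess
import sys

# Stand-in command exiting with code 2, mimicking the failed
# 'ansible -m minigraph_facts ...' invocation shown in the log.
cmd = [sys.executable, "-c", "import sys; sys.exit(2)"]

rc = None
try:
    subprocess.run(cmd, check=True)
except subprocess.CalledProcessError as e:
    rc = e.returncode  # the non-zero exit status (2 here)
    print("command failed with rc", rc)
```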

srv6/test_srv6_basic_sanity.py::test_interface_on_each_node FAILED       [ 11%]
srv6/test_srv6_basic_sanity.py::test_check_bgp_neighbors FAILED          [ 22%]
srv6/test_srv6_basic_sanity.py::test_check_routes FAILED                 [ 33%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_via_trex FAILED       [ 44%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_via_ptf 
-------------------------------- live log call ---------------------------------
08:58:17 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 282, in test_traffic_check_via_ptf
    raise Exception("Traffic test failed")
Exception: Traffic test failed

FAILED                                                                   [ 55%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_local_link_fail_case FAILED [ 66%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_igp_fail_case 
-------------------------------- live log call ---------------------------------
09:00:07 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 380, in test_traffic_check_remote_igp_fail_case
    p3.command(cmd)
  File "/data/sonic-mgmt/tests/common/devices/base.py", line 131, in _run
    raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
{"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet0", "down"], "delta": "0:00:00.058124", "end": "2025-12-04 09:00:06.835547", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-04 09:00:06.777423", "stderr": "Ethernet0: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet0: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

FAILED                                                                   [ 77%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_bgp_fail_case 
-------------------------------- live log call ---------------------------------
09:00:22 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 454, in test_traffic_check_remote_bgp_fail_case
    p3.command(cmd)
  File "/data/sonic-mgmt/tests/common/devices/base.py", line 131, in _run
    raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
{"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.048437", "end": "2025-12-04 09:00:21.867136", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-04 09:00:21.818699", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

FAILED                                                                   [ 88%]
srv6/test_srv6_basic_sanity.py::test_sbfd_functions SKIPPED (This te...) [100%]

=================================== FAILURES ===================================
_________________________ test_interface_on_each_node __________________________

duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}

    def test_interface_on_each_node(duthosts, rand_one_dut_hostname, nbrhosts):
        for vm_name in test_vm_names:
            nbrhost = nbrhosts[vm_name]['host']
            num, hwsku = find_node_interfaces(nbrhost)
            logger.debug("Get {} interfaces on {}, hwsku {}".format(num, vm_name, hwsku))
            if hwsku == "cisco-8101-p4-32x100-vs":
>               pytest_assert(num == 32)
E               Failed: None

duthosts   = [<MultiAsicSonicHost vlab-c-01>]
hwsku      = 'cisco-8101-p4-32x100-vs'
nbrhost    = <SonicHost VM0100>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
num        = 0
rand_one_dut_hostname = 'vlab-c-01'
vm_name    = 'PE1'

srv6/test_srv6_basic_sanity.py:137: Failed
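The bare `Failed: None` above comes from calling `pytest_assert(num == 32)` with no message argument. A sketch of a message-bearing call (using a minimal stand-in for sonic-mgmt's `pytest_assert`, which fails the test when the condition is falsy; the real helper calls `pytest.fail()`, this one raises `AssertionError` so it runs anywhere):

```python
def pytest_assert(condition, message=None):
    """Minimal stand-in for sonic-mgmt's pytest_assert: fail with
    `message` when `condition` is falsy."""
    if not condition:
        raise AssertionError(message)

# Values from the failure's --showlocals dump above.
num, hwsku = 0, "cisco-8101-p4-32x100-vs"
try:
    # With a message, the report reads better than 'Failed: None'.
    pytest_assert(num == 32, "expected 32 interfaces on {}, got {}".format(hwsku, num))
except AssertionError as e:
    print(e)  # expected 32 interfaces on cisco-8101-p4-32x100-vs, got 0
```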
___________________________ test_check_bgp_neighbors ___________________________

duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}

    def test_check_bgp_neighbors(duthosts, rand_one_dut_hostname, nbrhosts):
        logger.info("Check BGP Neighbors")
        # From PE3
        nbrhost = nbrhosts["PE3"]['host']
>       pytest_assert(
            wait_until(
                60, 10, 0, check_bgp_neighbors_func, nbrhost,
                ['2064:100::1d', '2064:200::1e', 'fc06::2', 'fc08::2']
            ),
            "wait for PE3 BGP neighbors up"
        )
E       Failed: wait for PE3 BGP neighbors up

duthosts   = [<MultiAsicSonicHost vlab-c-01>]
nbrhost    = <SonicHost VM0102>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
rand_one_dut_hostname = 'vlab-c-01'

srv6/test_srv6_basic_sanity.py:153: Failed
______________________________ test_check_routes _______________________________

duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}

    def test_check_routes(duthosts, rand_one_dut_hostname, nbrhosts):
        global_route = ""
        is_v6 = True
    
        # From PE3
        nbrhost = nbrhosts["PE3"]['host']
        logger.info("Check learnt vpn routes")
        # check remote learnt VPN routes via two PE1 and PE2
        dut1_ips = []
        for x in range(1, num_ce_routes+1):
            ip = "{}.{}/32".format(route_prefix_for_pe1_and_pe2, x)
            dut1_ips.append(ip)
>       check_routes(nbrhost, dut1_ips, ["2064:100::1d", "2064:200::1e"], "Vrf1")

dut1_ips   = ['192.100.0.1/32', '192.100.0.2/32', '192.100.0.3/32', '192.100.0.4/32', '192.100.0.5/32', '192.100.0.6/32', ...]
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
global_route = ''
ip         = '192.100.0.10/32'
is_v6      = True
nbrhost    = <SonicHost VM0102>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
rand_one_dut_hostname = 'vlab-c-01'
x          = 10

srv6/test_srv6_basic_sanity.py:198: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

nbrhost = <SonicHost VM0102>
ips = ['192.100.0.1/32', '192.100.0.2/32', '192.100.0.3/32', '192.100.0.4/32', '192.100.0.5/32', '192.100.0.6/32', ...]
nexthops = ['2064:100::1d', '2064:200::1e'], vrf = 'Vrf1', is_v6 = False

    def check_routes(nbrhost, ips, nexthops, vrf="", is_v6=False):
        # Add retry for debugging purpose
        count = 0
        ret = False
    
        #
        # Sleep 10 sec before retrying
        #
        sleep_duration_for_retry = 10
    
        # retry 3 times before claiming failure
        while count < 3 and ret == False:
            ret = check_routes_func(nbrhost, ips, nexthops, vrf, is_v6)
            if not ret:
                count = count + 1
                # sleep make sure all forwarding structures are settled down.
                time.sleep(sleep_duration_for_retry)
                logger.info("Sleep {} seconds to retry round {}".format(sleep_duration_for_retry, count))
    
>       pytest_assert(ret)
E       Failed: None

count      = 3
ips        = ['192.100.0.1/32', '192.100.0.2/32', '192.100.0.3/32', '192.100.0.4/32', '192.100.0.5/32', '192.100.0.6/32', ...]
is_v6      = False
nbrhost    = <SonicHost VM0102>
nexthops   = ['2064:100::1d', '2064:200::1e']
ret        = False
sleep_duration_for_retry = 10
vrf        = 'Vrf1'

srv6/srv6_utils.py:285: Failed
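`check_routes` above retries `check_routes_func` three times with a 10-second sleep between attempts before asserting. A stripped-down sketch of that retry pattern (a hypothetical helper, with the sleep injectable so it can be exercised without waiting):

```python
import time

def retry_check(check_func, retries=3, sleep_duration=10, sleep_func=time.sleep):
    """Call check_func up to `retries` times, sleeping between attempts.

    Returns True as soon as check_func does; False once retries are exhausted.
    """
    for attempt in range(retries):
        if check_func():
            return True
        # Sleep so forwarding structures can settle before the next try.
        sleep_func(sleep_duration)
    return False

# A check that succeeds on its third attempt.
calls = {"n": 0}
def flaky_check():
    calls["n"] += 1
    return calls["n"] >= 3

print(retry_check(flaky_check, sleep_func=lambda s: None))  # True
```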
_________________________ test_traffic_check_via_trex __________________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_via_trex(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        #
        # Create a packet sending to 192.100.0.1
        #
    
        #add trex tream check
        test_ipv4_dip = "192.100.0.1"
        reset_topo_pkt_counter(ptfadapter) #reset counters before each run
        result = trex_run(test_ipv4_dip, duration = 5) #run sync mode
        #result example {'ptf_tot_tx': 10000, 'ptf_tot_rx': 10000, 'P3_tx_to_PE2': 2500, 'P2_tx_to_PE1': 2500, 'P1_tx_to_PE2': 2500, 'P1_tx_to_PE2': 2500}
        expect_list = {"ptf_tot_rx": 5000, "ptf_tot_tx": 5000, "PE3_tx_to_P4": 2500, "PE3_tx_to_P2": 2500} #check pkt count on any link
        logger.info("test_traffic_check vrf ip:{} test result:{}, expect_list:{}".format(test_ipv4_dip, result, expect_list))
>       pytest_assert(thresh_check(result, expect_list))
E       Failed: None

duthosts   = [<MultiAsicSonicHost vlab-c-01>]
expect_list = {'PE3_tx_to_P2': 2500, 'PE3_tx_to_P4': 2500, 'ptf_tot_rx': 5000, 'ptf_tot_tx': 5000}
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
rand_one_dut_hostname = 'vlab-c-01'
result     = {'P1_tx_to_PE1': 0, 'P1_tx_to_PE2': 0, 'P2_tx_to_P1': 0, 'P2_tx_to_P3': 0, ...}
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
test_ipv4_dip = '192.100.0.1'

srv6/test_srv6_basic_sanity.py:224: Failed
__________________________ test_traffic_check_via_ptf __________________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_via_ptf(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        # establish_and_configure_bfd(nbrhosts)
        tcp_pkt0 = simple_tcp_packet(
            ip_src="192.200.0.1",
            ip_dst="192.100.0.1",
            tcp_sport=8888,
            tcp_dport=6666,
            ip_ttl=64
        )
        pkt = tcp_pkt0.copy()
        pkt['Ether'].dst = sender_mac
    
        exp_pkt = tcp_pkt0.copy()
        exp_pkt['IP'].ttl -= 4
        masked2recv = Mask(exp_pkt)
        masked2recv.set_do_not_care_packet(scapy.Ether, "dst")
        masked2recv.set_do_not_care_packet(scapy.Ether, "src")
    
        # Enable tcpdump for debugging purpose, file_loc is host file location
        intf_list = ["VM0102-t1", "VM0102-t3"]
        file_loc = "~/sonic-mgmt/tests/logs/"
        prefix = "test_traffic_check"
        enable_tcpdump(intf_list, file_loc, prefix, True, True)
    
        # Add retry for debugging purpose
        count = 0
        done = False
        while count < 10 and done == False:
            try:
                runSendReceive(pkt, ptf_port_for_backplane, masked2recv, [ptf_port_for_backplane], True, ptfadapter)
                logger.info("Done with traffic run")
                done = True
            except Exception as e:
                count = count + 1
                logger.info("Retry round {}".format(count))
                # sleep make sure all forwarding structures are settled down.
                sleep_duration_for_retry = 60
                time.sleep(sleep_duration_for_retry)
                logger.info("Sleep {} seconds to make sure all forwarding structures are settled down".format(sleep_duration_for_retry))
    
        # Disable tcpdump
        disable_tcpdump(True)
    
        logger.info("Done {} count {}".format(done, count))
        if not done:
>           raise Exception("Traffic test failed")
E           Exception: Traffic test failed

count      = 10
done       = False
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
exp_pkt    = <Ether  dst=00:01:02:03:04:05 src=00:06:07:08:09:0a type=IPv4 |<IP  ihl=None tos=0x0 id=1 frag=0 ttl=60 proto=tcp src=...dst=192.100.0.1 |<TCP  sport=8888 dport=6666 flags=S |<Raw  load='test_srv6_basic_sanity test_srv6_basic_sanity ' |>>>>
file_loc   = '~/sonic-mgmt/tests/logs/'
intf_list  = ['VM0102-t1', 'VM0102-t3']
masked2recv = <ptf.mask.Mask object at 0x7efc503fa190>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
pkt        = <Ether  dst=52:54:00:df:1c:5e src=00:06:07:08:09:0a type=IPv4 |<IP  ihl=None tos=0x0 id=1 frag=0 ttl=64 proto=tcp src=...dst=192.100.0.1 |<TCP  sport=8888 dport=6666 flags=S |<Raw  load='test_srv6_basic_sanity test_srv6_basic_sanity ' |>>>>
prefix     = 'test_traffic_check'
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
rand_one_dut_hostname = 'vlab-c-01'
sleep_duration_for_retry = 60
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
tcp_pkt0   = <Ether  dst=00:01:02:03:04:05 src=00:06:07:08:09:0a type=IPv4 |<IP  ihl=None tos=0x0 id=1 frag=0 ttl=64 proto=tcp src=...x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%&\'()*+,-' |>>>>

srv6/test_srv6_basic_sanity.py:282: Exception
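The expected packet above is built with `exp_pkt['IP'].ttl -= 4`, i.e. the test anticipates four IP forwarding hops between injection and capture (ttl 64 in, 60 expected back, per the `--showlocals` dump). A trivial helper makes the arithmetic explicit (a sketch, not part of the test suite):

```python
def expected_ttl(initial_ttl, hops):
    """IP TTL after traversing `hops` routers; clamped at 0,
    where a real router would instead drop the packet."""
    return max(initial_ttl - hops, 0)

# The test sends with ttl=64 and expects ttl=60, i.e. four hops.
print(expected_ttl(64, 4))  # 60
```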
___________________ test_traffic_check_local_link_fail_case ____________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_local_link_fail_case(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        filename = "zebra_case_1_locallink_down.txt"
        docker_filename = "/tmp/{}".format(filename)
        vm = "PE3"
        pe3 = nbrhosts[vm]['host']
        p2 = nbrhosts["P2"]['host']
    
        logname = "zebra_case_1_locallink_down_running_log.txt"
        # Recording
        recording_fwding_chain(pe3, logname, "Before starting local link fail case")
        #
        # Turn on frr debug
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, True)
        #
        # shut down the link between PE3 and P2
        #
        cmd = "sudo ifconfig Ethernet4 down"
        pe3.command(cmd)
        cmd = "sudo ifconfig Ethernet12 down"
        p2.command(cmd)
        time.sleep(sleep_duration)
        # expect remaining BGP session are up on PE3
        ret1 = wait_until(
            bgp_neighbor_down_wait_time,
            10, 0, check_bgp_neighbors_func,
            pe3, ['2064:100::1d', '2064:200::1e', 'fc06::2'])
    
        # Recording
        recording_fwding_chain(pe3, logname, "After local link down")
    
        #
        # Recover local links
        #
        cmd = "sudo ifconfig Ethernet4 up"
        pe3.command(cmd)
        cmd = "sudo ifconfig Ethernet12 up"
        p2.command(cmd)
        time.sleep(sleep_duration)
    
        # Recording
        recording_fwding_chain(pe3, logname, "After the local link gets recovered")
    
        #
        # Turn off frr debug and collect debug log
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, False)
        collect_frr_debugfile(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm)
    
        # expect remaining BGP session are up on PE3
>       pytest_assert(ret1, "wait for PE3 BGP neighbors to settle down")
E       Failed: wait for PE3 BGP neighbors to settle down

cmd        = 'sudo ifconfig Ethernet12 up'
docker_filename = '/tmp/zebra_case_1_locallink_down.txt'
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
filename   = 'zebra_case_1_locallink_down.txt'
logname    = 'zebra_case_1_locallink_down_running_log.txt'
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
p2         = <SonicHost VM0104>
pe3        = <SonicHost VM0102>
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
rand_one_dut_hostname = 'vlab-c-01'
ret1       = False
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
vm         = 'PE3'

srv6/test_srv6_basic_sanity.py:338: Failed
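The `wait_until(timeout, interval, delay, func, *args)` calls above poll `check_bgp_neighbors_func` until it returns truthy or the timeout expires. A simplified sketch of such a polling helper (the real one in sonic-mgmt has more bookkeeping; the sleep and clock here are injectable purely so the sketch is fast to exercise):

```python
import time

def wait_until(timeout, interval, delay, condition, *args,
               _sleep=time.sleep, _clock=time.monotonic):
    """Poll `condition(*args)` every `interval` seconds until it returns
    truthy or `timeout` seconds elapse; `delay` is an initial wait."""
    if delay:
        _sleep(delay)
    deadline = _clock() + timeout
    while _clock() < deadline:
        if condition(*args):
            return True
        _sleep(interval)
    return False

# A condition that becomes true on its second poll.
state = {"n": 0}
def becomes_true():
    state["n"] += 1
    return state["n"] >= 2

print(wait_until(5, 0, 0, becomes_true, _sleep=lambda s: None))  # True
```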
___________________ test_traffic_check_remote_igp_fail_case ____________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_remote_igp_fail_case(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        filename = "zebra_case_2_remotelink_down.txt"
        docker_filename = "/tmp/{}".format(filename)
        vm = "PE3"
        pe3 = nbrhosts[vm]['host']
    
        logname = "zebra_case_2_remotelink_down_running_log.txt"
        # Recording
        recording_fwding_chain(pe3, logname, "Before starting remote link fail case")
        #
        # Turn on frr debug
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, True)
        #
        # shut down the link between P3 and P1, P2, P4
        #
        p1 = duthosts[rand_one_dut_hostname]
        p2 = nbrhosts["P2"]['host']
        p3 = nbrhosts["P3"]['host']
        p4 = nbrhosts["P4"]['host']
    
        cmd = "sudo ifconfig Ethernet124 down"
        p1.command(cmd)
        cmd = "sudo ifconfig Ethernet4 down"
        p2.command(cmd)
        cmd = "sudo ifconfig Ethernet4 down"
        p4.command(cmd)
    
        cmd = "sudo ifconfig Ethernet0 down"
>       p3.command(cmd)

cmd        = 'sudo ifconfig Ethernet0 down'
docker_filename = '/tmp/zebra_case_2_remotelink_down.txt'
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
filename   = 'zebra_case_2_remotelink_down.txt'
logname    = 'zebra_case_2_remotelink_down_running_log.txt'
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
p1         = <MultiAsicSonicHost vlab-c-01>
p2         = <SonicHost VM0104>
p3         = <SonicHost VM0103>
p4         = <SonicHost VM0105>
pe3        = <SonicHost VM0102>
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
rand_one_dut_hostname = 'vlab-c-01'
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
vm         = 'PE3'

srv6/test_srv6_basic_sanity.py:380: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost VM0103>, module_args = ['sudo ifconfig Ethernet0 down']
complex_args = {}
previous_frame = <frame at 0x3238bd0, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 380, code test_traffic_check_remote_igp_fail_case>
filename = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
line_number = 380, function_name = 'test_traffic_check_remote_igp_fail_case'
lines = ['    p3.command(cmd)\n'], index = 0, verbose = True
module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
        res = self.module(*module_args, **complex_args)[self.hostname]
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} Result => {}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name, json.dumps(res, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} done, is_failed={}, rc={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    res.is_failed,
                    res.get('rc', None)
                )
            )
    
        if (res.is_failed or 'exception' in res) and not module_ignore_errors:
>           raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
E           tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
E           {"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet0", "down"], "delta": "0:00:00.058124", "end": "2025-12-04 09:00:06.835547", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-04 09:00:06.777423", "stderr": "Ethernet0: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet0: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

complex_args = {}
filename   = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
function_name = 'test_traffic_check_remote_igp_fail_case'
index      = 0
line_number = 380
lines      = ['    p3.command(cmd)\n']
module_args = ['sudo ifconfig Ethernet0 down']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x3238bd0, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 380, code test_traffic_check_remote_igp_fail_case>
res        = {'failed': True, 'msg': 'non-zero return code', 'cmd': ['sudo', 'ifconfig', 'Ethernet0', 'down'], 'stdout': '', 'stder...nes': [], 'stderr_lines': ['Ethernet0: ERROR while getting interface flags: No such device'], '_ansible_no_log': False}
self       = <SonicHost VM0103>
verbose    = True

common/devices/base.py:131: RunAnsibleModuleFail
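Both remote-failure tests die on `sudo ifconfig <intf> down` for an interface the neighbor VM does not expose ("No such device", rc 255), which `_run` converts into `RunAnsibleModuleFail`. One hedged mitigation, sketched below with a hypothetical `known_interfaces` list standing in for the host's actual link list (e.g. from `ip -o link show`), is to check the device exists before issuing the shutdown; the real fix may instead be correcting the interface names for this topology:

```python
def safe_ifdown_cmd(interface, known_interfaces):
    """Return the 'ifconfig ... down' command for `interface`, or None
    when the device is absent, avoiding the rc=255 'No such device' failure."""
    if interface not in known_interfaces:
        return None
    return "sudo ifconfig {} down".format(interface)

# Hypothetical interface list; on a real host this would come from
# something like `ip -o link show` on the neighbor VM.
known = ["Ethernet4", "Ethernet8"]
print(safe_ifdown_cmd("Ethernet0", known))  # None
print(safe_ifdown_cmd("Ethernet4", known))  # sudo ifconfig Ethernet4 down
```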
___________________ test_traffic_check_remote_bgp_fail_case ____________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_remote_bgp_fail_case(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        filename = "zebra_case_3_remote_peer_down.txt"
        docker_filename = "/tmp/{}".format(filename)
        vm = "PE3"
        pe3 = nbrhosts[vm]['host']
    
        logname = "zebra_case_3_remote_peer_down_running_log.txt"
        # Recording
        recording_fwding_chain(pe3, logname, "Before starting remote PE failure case")
        #
        # Turn on frr debug
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, True)
        #
        # shut down the link between PE1 and P1, P3
        #
        p1 = duthosts[rand_one_dut_hostname]
        pe1 = nbrhosts["PE1"]['host']
        p3 = nbrhosts["P3"]['host']
    
        cmd = "sudo ifconfig Ethernet112 down"
        p1.command(cmd)
        cmd = "sudo ifconfig Ethernet4 down"
>       p3.command(cmd)

cmd        = 'sudo ifconfig Ethernet4 down'
docker_filename = '/tmp/zebra_case_3_remote_peer_down.txt'
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
filename   = 'zebra_case_3_remote_peer_down.txt'
logname    = 'zebra_case_3_remote_peer_down_running_log.txt'
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
p1         = <MultiAsicSonicHost vlab-c-01>
p3         = <SonicHost VM0103>
pe1        = <SonicHost VM0100>
pe3        = <SonicHost VM0102>
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7efc52c484c0>
rand_one_dut_hostname = 'vlab-c-01'
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
vm         = 'PE3'

srv6/test_srv6_basic_sanity.py:454: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost VM0103>, module_args = ['sudo ifconfig Ethernet4 down']
complex_args = {}
previous_frame = <frame at 0x328bf70, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 454, code test_traffic_check_remote_bgp_fail_case>
filename = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
line_number = 454, function_name = 'test_traffic_check_remote_bgp_fail_case'
lines = ['    p3.command(cmd)\n'], index = 0, verbose = True
module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
        res = self.module(*module_args, **complex_args)[self.hostname]
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} Result => {}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name, json.dumps(res, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} done, is_failed={}, rc={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    res.is_failed,
                    res.get('rc', None)
                )
            )
    
        if (res.is_failed or 'exception' in res) and not module_ignore_errors:
>           raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
E           tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
E           {"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.048437", "end": "2025-12-04 09:00:21.867136", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-04 09:00:21.818699", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

complex_args = {}
filename   = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
function_name = 'test_traffic_check_remote_bgp_fail_case'
index      = 0
line_number = 454
lines      = ['    p3.command(cmd)\n']
module_args = ['sudo ifconfig Ethernet4 down']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x328bf70, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 454, code test_traffic_check_remote_bgp_fail_case>
res        = {'failed': True, 'msg': 'non-zero return code', 'cmd': ['sudo', 'ifconfig', 'Ethernet4', 'down'], 'stdout': '', 'stder...nes': [], 'stderr_lines': ['Ethernet4: ERROR while getting interface flags: No such device'], '_ansible_no_log': False}
self       = <SonicHost VM0103>
verbose    = True

common/devices/base.py:131: RunAnsibleModuleFail
=============================== warnings summary ===============================
common/plugins/loganalyzer/system_msg_handler.py:1
  /data/sonic-mgmt/tests/common/plugins/loganalyzer/system_msg_handler.py:1: DeprecationWarning: invalid escape sequence \ 
    '''

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
------------ generated xml file: /data/sonic-mgmt/tests/logs/tr.xml ------------
=========================== short test summary info ============================
SKIPPED [1] srv6/test_srv6_basic_sanity.py:500: This test is temporarily disabled due to configuration changes.
FAILED srv6/test_srv6_basic_sanity.py::test_interface_on_each_node - Failed: ...
FAILED srv6/test_srv6_basic_sanity.py::test_check_bgp_neighbors - Failed: wai...
FAILED srv6/test_srv6_basic_sanity.py::test_check_routes - Failed: None
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_via_trex - Failed: ...
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_via_ptf - Exception...
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_local_link_fail_case
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_igp_fail_case
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_bgp_fail_case
============= 8 failed, 1 skipped, 1 warning in 975.49s (0:16:15) ==============
2025-12-04 16:43:54.587793 : Get input param file /root/workspace/PhoenixWingDailySRv6Test/jenkins_647/input_param_json.txt
2025-12-04 16:43:54.588115 : Get file lock for {'user_info': 'PhoenixWing_Daily_SRv6_Test_647'}
2025-12-04 16:43:55.589415 : Found index 0 for action read, user_info PhoenixWing_Daily_SRv6_Test_647
2025-12-04 16:43:55.589465 : Release file lock for {'user_info': 'PhoenixWing_Daily_SRv6_Test_647', 'action': 'read', 'output_vm': {'index': 3, 'user_info': 'PhoenixWing_Daily_SRv6_Test_647'}, 'output_index': 0, 'output_prefix': '192.168.0'}
2025-12-04 16:43:55.589523 : read_vm_reservation : {"user_info": "PhoenixWing_Daily_SRv6_Test_647", "action": "read", "output_vm": {"index": 3, "user_info": "PhoenixWing_Daily_SRv6_Test_647"}, "output_index": 0, "output_prefix": "192.168.0"}
2025-12-04 16:43:55.589710 : ifconfig | grep 30.57.186.111
2025-12-04 16:43:55.592278 : ifconfig | grep 30.57.186.42
2025-12-04 16:43:55.594348 : ifconfig | grep 30.57.186.79
2025-12-04 16:43:55.596792 : ifconfig | grep 30.57.186.80
2025-12-04 16:43:55.598887 : ifconfig | grep 30.57.186.218
2025-12-04 16:43:55.600921 : ifconfig | grep 30.57.186.175
2025-12-04 16:43:55.602993 : ifconfig | grep 11.166.8.106
2025-12-04 16:43:55.605035 : ifconfig | grep 11.166.8.104
2025-12-04 16:43:55.607167 : ifconfig | grep 11.165.122.19
2025-12-04 16:43:55.609294 : ifconfig | grep 11.166.1.213
2025-12-04 16:43:55.611316 : ifconfig | grep 11.165.121.210
2025-12-04 16:43:55.613357 : ifconfig | grep 11.165.120.75
2025-12-04 16:43:55.615425 : ifconfig | grep 11.165.121.106
2025-12-04 16:43:55.617563 : DEBUG_ARR:         inet 11.165.121.106  netmask 255.255.252.0  broadcast 11.165.123.255
2025-12-04 16:43:55.617594 : Found local server setting for 11.165.121.106
2025-12-04 16:43:55.617606 : Set local ip as 192.168.0.3
{   'address': '11.165.121.106',
    'host_port': 'eth0',
    'jenkin_node_name': 'Pytest_ECS_165_106',
    'password': 'Alin00000s!',
    'user': 'root',
    'vm_bridge': 'vmbr0',
    'vm_gw': '192.168.0.1',
    'vmip': '192.168.0.2'}
2025-12-04 16:43:55.617982 : mkdir -p /tmp/local_cache//1764837834.587788/
Run pytest on 11.165.121.106 vmip 192.168.0.3, vm name _192.168.0.3
Get input topo vms-kvm-ciscovs-7nodes
Get input test case  -c "srv6/test_srv6_basic_sanity.py" 
2025-12-04 16:43:55.620079 : ping 192.168.0.3 -c 2
2025-12-04 16:43:56.624542 : DEBUG_ARR: PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
2025-12-04 16:43:56.624587 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.194 ms
2025-12-04 16:43:56.624591 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.156 ms
2025-12-04 16:43:56.624597 : DEBUG_ARR: 
2025-12-04 16:43:56.624601 : DEBUG_ARR: --- 192.168.0.3 ping statistics ---
2025-12-04 16:43:56.624604 : DEBUG_ARR: 2 packets transmitted, 2 received, 0% packet loss, time 1001ms
2025-12-04 16:43:56.624608 : DEBUG_ARR: rtt min/avg/max/mdev = 0.156/0.175/0.194/0.019 ms
2025-12-04 16:43:56.624626 : ping 192.168.0.3 -c 2
2025-12-04 16:43:57.648586 : DEBUG_ARR: PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
2025-12-04 16:43:57.648628 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.124 ms
2025-12-04 16:43:57.648633 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.269 ms
2025-12-04 16:43:57.648636 : DEBUG_ARR: 
2025-12-04 16:43:57.648640 : DEBUG_ARR: --- 192.168.0.3 ping statistics ---
2025-12-04 16:43:57.648642 : DEBUG_ARR: 2 packets transmitted, 2 received, 0% packet loss, time 1021ms
2025-12-04 16:43:57.648645 : DEBUG_ARR: rtt min/avg/max/mdev = 0.124/0.196/0.269/0.072 ms
2025-12-04 16:43:57.648677 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "docker exec --user ubuntu sonic-mgmt-test bash -c 'ls'"
2025-12-04 16:43:58.532164 : Run sudo monit unmonitor container_checker for range(0, 1)
2025-12-04 16:43:58.532205 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 sudo monit unmonitor container_checker'"
2025-12-04 16:43:59.564052 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump"
2025-12-04 16:44:00.219076 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "sudo chmod 777 /var/run/openvswitch/*"
2025-12-04 16:44:00.879494 : sshpass -p "123" scp   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" /root/workspace/PhoenixWingDailySRv6Test/jenkins_647/input_param_json.txt ubuntu@192.168.0.3:~/
2025-12-04 16:44:01.583373 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "docker exec --user ubuntu sonic-mgmt-test bash -c 'python3 -m venv ~/env-python3 ; source ~/env-python3/bin/activate;  pip install -i https://mirrors.aliyun.com/pypi/simple/  --upgrade \"paramiko>=3.5.1\";  cd /data/sonic-mgmt/tests; ./run_tests.sh -n vms-kvm-ciscovs-7nodes -d vlab-c-01  -c "srv6/test_srv6_basic_sanity.py"  -f vtestbed.yaml -i ../ansible/veos_vtb  -u  -e --skip_sanity -e --disable_loganalyzer -e --neighbor_type=sonic '"
2025-12-04 17:00:23.954320 : Run sudo ls -l  /etc/sonic/frr/* for range(0, 1)
2025-12-04 17:00:23.954367 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 sudo ls -l  /etc/sonic/frr/*'"
2025-12-04 17:00:26.051637 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
 09:00:27 up 35 min,  0 user,  load average: 24.44, 24.25, 21.26
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:00:26.099857 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:26.099903 : Run sudo ls -l  /etc/sonic/frr/* for range(0, 6)
2025-12-04 17:00:26.099913 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 sudo ls -l  /etc/sonic/frr/*'"
2025-12-04 17:00:27.243935 : Run uptime for range(0, 1)
2025-12-04 17:00:27.243968 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 uptime'"
2025-12-04 17:00:28.305683 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
 09:00:28 up 42 min,  0 user,  load average: 22.74, 22.75, 21.42
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS     NAMES
1443e180a7ee   docker-snmp:latest                   "/usr/bin/docker-snm…"   29 minutes ago   Up 20 minutes             snmp
80183fe585f1   docker-platform-monitor:latest       "/usr/bin/docker_ini…"   29 minutes ago   Up 20 minutes             pmon
9b77da99d81b   docker-sonic-mgmt-framework:latest   "/usr/local/bin/supe…"   29 minutes ago   Up 20 minutes             mgmt-framework
38b83dd83c35   docker-lldp:latest                   "/usr/bin/docker-lld…"   29 minutes ago   Up 20 minutes             lldp
bde323e2d573   docker-sonic-gnmi:latest             "/usr/local/bin/supe…"   29 minutes ago   Up 21 minutes             gnmi
31e8e4b47a80   docker-router-advertiser:latest      "/usr/bin/docker-ini…"   32 minutes ago   Up 23 minutes             radv
255514e7c26f   docker-eventd:latest                 "/usr/local/bin/supe…"   32 minutes ago   Up 23 minutes             eventd
967e5aff3ad7   docker-fpm-frr:latest                "/usr/bin/docker_ini…"   32 minutes ago   Up 23 minutes             bgp
979ed51e6445   docker-syncd-ciscovs:latest          "/usr/bin/docker_ini…"   32 minutes ago   Up 24 minutes             syncd
659061c9231a   docker-teamd:latest                  "/usr/local/bin/supe…"   32 minutes ago   Up 24 minutes             teamd
465ac8fed76c   docker-sysmgr:latest                 "/usr/local/bin/supe…"   32 minutes ago   Up 24 minutes             sysmgr
40dde0a5dd36   docker-orchagent:latest              "/usr/bin/docker-ini…"   32 minutes ago   Up 24 minutes             swss
7376415772ad   docker-database:latest               "/usr/local/bin/dock…"   32 minutes ago   Up 32 minutes             database
2025-12-04 17:00:28.353036 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:28.353099 : Run uptime for range(0, 6)
2025-12-04 17:00:28.353109 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 uptime'"
2025-12-04 17:00:29.327985 : Run docker ps for range(0, 1)
2025-12-04 17:00:29.328017 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 docker ps'"
2025-12-04 17:00:30.458097 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
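The commands logged above follow a two-hop pattern: sshpass ssh to the hypervisor, docker exec into the sonic-mgmt container, then a second sshpass ssh on to the DUT or neighbor VM. A simplified builder for that command string (the real quoting in the log leans on the outer shell; this flat f-string version is a sketch, not the harness's own helper):

```python
def nested_ssh_cmd(jump, jump_pw, container, target, target_pw, remote_cmd,
                   timeout=20):
    """Rebuild the harness-style two-hop command seen in the log:
    ssh to the hypervisor, docker exec into the mgmt container,
    then ssh on to the DUT/VM. Simplified quoting; illustrative only."""
    opts = '-q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
    inner = f'sshpass -p {target_pw} ssh {opts} {target} {remote_cmd}'
    docker = f"timeout {timeout} docker exec {container} bash -c '{inner}'"
    return f'sshpass -p "{jump_pw}" ssh {opts} {jump} "{docker}"'

cmd = nested_ssh_cmd('ubuntu@192.168.0.3', '123', 'sonic-mgmt-test',
                     'admin@10.250.0.125', 'password', 'docker ps')
print(cmd)
```

Disabling host-key checking and passing passwords via sshpass is acceptable for a throwaway KVM testbed but would be unsafe against real infrastructure.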
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS     NAMES
14fe199861a0   docker-snmp:latest                   "/usr/bin/docker-snm…"   37 minutes ago   Up 6 minutes              snmp
18b8ca903473   docker-platform-monitor:latest       "/usr/bin/docker_ini…"   37 minutes ago   Up 26 minutes             pmon
8b35a075c97a   docker-sonic-mgmt-framework:latest   "/usr/local/bin/supe…"   37 minutes ago   Up 26 minutes             mgmt-framework
3e191c875983   docker-lldp:latest                   "/usr/bin/docker-lld…"   37 minutes ago   Up 26 minutes             lldp
0309c16ba7cf   docker-sonic-gnmi:latest             "/usr/local/bin/supe…"   37 minutes ago   Up 26 minutes             gnmi
a61a4a4e2ce8   docker-router-advertiser:latest      "/usr/bin/docker-ini…"   41 minutes ago   Up 6 minutes              radv
1c62d80ee8a3   docker-fpm-frr:latest                "/usr/bin/docker_ini…"   41 minutes ago   Up 6 minutes              bgp
f81bfa0f28a5   docker-syncd-ciscovs:latest          "/usr/bin/docker_ini…"   41 minutes ago   Up 6 minutes              syncd
dab0bdef88de   docker-teamd:latest                  "/usr/local/bin/supe…"   41 minutes ago   Up 6 minutes              teamd
e35ebfad9e76   docker-sysmgr:latest                 "/usr/local/bin/supe…"   41 minutes ago   Up 29 minutes             sysmgr
b489faf056ca   docker-eventd:latest                 "/usr/local/bin/supe…"   41 minutes ago   Up 29 minutes             eventd
5d8e4a167a05   docker-orchagent:latest              "/usr/bin/docker-ini…"   41 minutes ago   Up 6 minutes              swss
0d8ca630c329   docker-database:latest               "/usr/local/bin/dock…"   41 minutes ago   Up 41 minutes             database

SONiC Software Version: SONiC.phoenixwing_08192025.384-dirty-20251202.084825
SONiC OS Version: 12
Distribution: Debian 12.12
Kernel: 6.1.0-29-2-amd64
Build commit: 885eae54f
Build date: Tue Dec  2 09:58:10 UTC 2025
Built by: joy@joy

Platform: x86_64-kvm_x86_64-r0
HwSKU: cisco-8101-p4-32x100-vs
ASIC: cisco-ngdp-vs
ASIC Count: 1
Serial Number: N/A
Model Number: N/A
Hardware Revision: N/A
Uptime: 09:00:32 up 35 min,  0 user,  load average: 24.57, 24.28, 21.28
Date: Thu 04 Dec 2025 09:00:32

Docker images:
REPOSITORY                    TAG                                              IMAGE ID       SIZE
docker-macsec                 latest                                           8922bfb3fb75   319MB
docker-macsec                 phoenixwing_08192025.384-dirty-20251202.084825   8922bfb3fb75   319MB
docker-dhcp-relay             latest                                           aaba8d83c448   295MB
docker-dhcp-relay             phoenixwing_08192025.384-dirty-20251202.084825   aaba8d83c448   295MB
docker-teamd                  latest                                           f49c66de6ccd   316MB
docker-teamd                  phoenixwing_08192025.384-dirty-20251202.084825   f49c66de6ccd   316MB
docker-sysmgr                 latest                                           53cd1475b2a7   298MB
docker-sysmgr                 phoenixwing_08192025.384-dirty-20251202.084825   53cd1475b2a7   298MB
docker-sonic-mgmt-framework   latest                                           b03fc415056f   380MB
docker-sonic-mgmt-framework   phoenixwing_08192025.384-dirty-20251202.084825   b03fc415056f   380MB
docker-snmp                   latest                                           e1fc1d78905a   311MB
docker-snmp                   phoenixwing_08192025.384-dirty-20251202.084825   e1fc1d78905a   311MB
docker-sflow                  latest                                           91b4ebdeb025   317MB
docker-sflow                  phoenixwing_08192025.384-dirty-20251202.084825   91b4ebdeb025   317MB
docker-router-advertiser      latest                                           55150179cbf5   286MB
docker-router-advertiser      phoenixwing_08192025.384-dirty-20251202.084825   55150179cbf5   286MB
docker-platform-monitor       latest                                           7a3be2d81f94   420MB
docker-platform-monitor       phoenixwing_08192025.384-dirty-20251202.084825   7a3be2d81f94   420MB
docker-orchagent              latest                                           c5d3081188ab   328MB
docker-orchagent              phoenixwing_08192025.384-dirty-20251202.084825   c5d3081188ab   328MB
docker-nat                    latest                                           6ceca7bbccd4   319MB
docker-nat                    phoenixwing_08192025.384-dirty-20251202.084825   6ceca7bbccd4   319MB
docker-mux                    latest                                           cb5b1097b140   338MB
docker-mux                    phoenixwing_08192025.384-dirty-20251202.084825   cb5b1097b140   338MB
docker-lldp                   latest                                           521912c35d16   332MB
docker-lldp                   phoenixwing_08192025.384-dirty-20251202.084825   521912c35d16   332MB
docker-sonic-gnmi             latest                                           77e6bc5ba7aa   402MB
docker-sonic-gnmi             phoenixwing_08192025.384-dirty-20251202.084825   77e6bc5ba7aa   402MB
docker-gnmi-watchdog          latest                                           3299c6e0cbfb   294MB
docker-gnmi-watchdog          phoenixwing_08192025.384-dirty-20251202.084825   3299c6e0cbfb   294MB
docker-fpm-frr                latest                                           4475594c8d9f   365MB
docker-fpm-frr                phoenixwing_08192025.384-dirty-20251202.084825   4475594c8d9f   365MB
docker-eventd                 latest                                           84d8e51b7698   286MB
docker-eventd                 phoenixwing_08192025.384-dirty-20251202.084825   84d8e51b7698   286MB
docker-database               latest                                           19ac949fd780   299MB
docker-database               phoenixwing_08192025.384-dirty-20251202.084825   19ac949fd780   299MB
docker-sonic-bmp              latest                                           73c8a3d341e0   288MB
docker-sonic-bmp              phoenixwing_08192025.384-dirty-20251202.084825   73c8a3d341e0   288MB
docker-bmp-watchdog           latest                                           5f2a374311a2   286MB
docker-bmp-watchdog           phoenixwing_08192025.384-dirty-20251202.084825   5f2a374311a2   286MB
docker-auditd                 latest                                           c0c4dc50ec7e   286MB
docker-auditd                 phoenixwing_08192025.384-dirty-20251202.084825   c0c4dc50ec7e   286MB
docker-auditd-watchdog        latest                                           a2f36a025290   289MB
docker-auditd-watchdog        phoenixwing_08192025.384-dirty-20251202.084825   a2f36a025290   289MB
docker-syncd-ciscovs          latest                                           8df786aecda9   1.26GB
docker-syncd-ciscovs          phoenixwing_08192025.384-dirty-20251202.084825   8df786aecda9   1.26GB

{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:00:30.505594 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:30.505641 : Run docker ps for range(0, 6)
2025-12-04 17:00:30.505651 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 docker ps'"
2025-12-04 17:00:31.644158 : Run show version for range(0, 1)
2025-12-04 17:00:31.644190 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show version'"
2025-12-04 17:00:33.263393 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}

SONiC Software Version: SONiC.phoenixwing_08192025.384-dirty-20251202.084825
SONiC OS Version: 12
Distribution: Debian 12.12
Kernel: 6.1.0-29-2-amd64
Build commit: 885eae54f
Build date: Tue Dec  2 09:58:10 UTC 2025
Built by: joy@joy

Platform: x86_64-kvm_x86_64-r0
HwSKU: cisco-8101-p4-32x100-vs
ASIC: cisco-ngdp-vs
ASIC Count: 1
Serial Number: N/A
Model Number: N/A
Hardware Revision: N/A
Uptime: 09:00:33 up 42 min,  0 user,  load average: 23.48, 22.90, 21.47
Date: Thu 04 Dec 2025 09:00:33

Docker images:
REPOSITORY                    TAG                                              IMAGE ID       SIZE
docker-macsec                 latest                                           8922bfb3fb75   319MB
docker-macsec                 phoenixwing_08192025.384-dirty-20251202.084825   8922bfb3fb75   319MB
docker-dhcp-relay             latest                                           aaba8d83c448   295MB
docker-dhcp-relay             phoenixwing_08192025.384-dirty-20251202.084825   aaba8d83c448   295MB
docker-teamd                  latest                                           f49c66de6ccd   316MB
docker-teamd                  phoenixwing_08192025.384-dirty-20251202.084825   f49c66de6ccd   316MB
docker-sysmgr                 latest                                           53cd1475b2a7   298MB
docker-sysmgr                 phoenixwing_08192025.384-dirty-20251202.084825   53cd1475b2a7   298MB
docker-sonic-mgmt-framework   latest                                           b03fc415056f   380MB
docker-sonic-mgmt-framework   phoenixwing_08192025.384-dirty-20251202.084825   b03fc415056f   380MB
docker-snmp                   latest                                           e1fc1d78905a   311MB
docker-snmp                   phoenixwing_08192025.384-dirty-20251202.084825   e1fc1d78905a   311MB
docker-sflow                  latest                                           91b4ebdeb025   317MB
docker-sflow                  phoenixwing_08192025.384-dirty-20251202.084825   91b4ebdeb025   317MB
docker-router-advertiser      latest                                           55150179cbf5   286MB
docker-router-advertiser      phoenixwing_08192025.384-dirty-20251202.084825   55150179cbf5   286MB
docker-platform-monitor       latest                                           7a3be2d81f94   420MB
docker-platform-monitor       phoenixwing_08192025.384-dirty-20251202.084825   7a3be2d81f94   420MB
docker-orchagent              latest                                           c5d3081188ab   328MB
docker-orchagent              phoenixwing_08192025.384-dirty-20251202.084825   c5d3081188ab   328MB
docker-nat                    latest                                           6ceca7bbccd4   319MB
docker-nat                    phoenixwing_08192025.384-dirty-20251202.084825   6ceca7bbccd4   319MB
docker-mux                    latest                                           cb5b1097b140   338MB
docker-mux                    phoenixwing_08192025.384-dirty-20251202.084825   cb5b1097b140   338MB
docker-lldp                   latest                                           521912c35d16   332MB
docker-lldp                   phoenixwing_08192025.384-dirty-20251202.084825   521912c35d16   332MB
docker-sonic-gnmi             latest                                           77e6bc5ba7aa   402MB
docker-sonic-gnmi             phoenixwing_08192025.384-dirty-20251202.084825   77e6bc5ba7aa   402MB
docker-gnmi-watchdog          latest                                           3299c6e0cbfb   294MB
docker-gnmi-watchdog          phoenixwing_08192025.384-dirty-20251202.084825   3299c6e0cbfb   294MB
docker-fpm-frr                latest                                           4475594c8d9f   365MB
docker-fpm-frr                phoenixwing_08192025.384-dirty-20251202.084825   4475594c8d9f   365MB
docker-eventd                 latest                                           84d8e51b7698   286MB
docker-eventd                 phoenixwing_08192025.384-dirty-20251202.084825   84d8e51b7698   286MB
docker-database               latest                                           19ac949fd780   299MB
docker-database               phoenixwing_08192025.384-dirty-20251202.084825   19ac949fd780   299MB
docker-sonic-bmp              latest                                           73c8a3d341e0   288MB
docker-sonic-bmp              phoenixwing_08192025.384-dirty-20251202.084825   73c8a3d341e0   288MB
docker-bmp-watchdog           latest                                           5f2a374311a2   286MB
docker-bmp-watchdog           phoenixwing_08192025.384-dirty-20251202.084825   5f2a374311a2   286MB
docker-auditd                 latest                                           c0c4dc50ec7e   286MB
docker-auditd                 phoenixwing_08192025.384-dirty-20251202.084825   c0c4dc50ec7e   286MB
docker-auditd-watchdog        latest                                           a2f36a025290   289MB
docker-auditd-watchdog        phoenixwing_08192025.384-dirty-20251202.084825   a2f36a025290   289MB
docker-syncd-ciscovs          latest                                           8df786aecda9   1.26GB
docker-syncd-ciscovs          phoenixwing_08192025.384-dirty-20251202.084825   8df786aecda9   1.26GB

  Interface                Lanes    Speed    MTU    FEC        Alias    Vlan    Oper    Admin    Type    Asym PFC
-----------  -------------------  -------  -----  -----  -----------  ------  ------  -------  ------  ----------
  Ethernet0  2304,2305,2306,2307     100G   9100    N/A    Ethernet0  routed      up       up     N/A         N/A
  Ethernet4  2320,2321,2322,2323     100G   9100    N/A    Ethernet4  routed      up       up     N/A         N/A
  Ethernet8  2312,2313,2314,2315     100G   9100    N/A    Ethernet8  routed      up       up     N/A         N/A
 Ethernet12  2056,2057,2058,2059     100G   9100    N/A   Ethernet12  routed      up       up     N/A         N/A
 Ethernet16  1792,1793,1794,1795     100G   9100    N/A   Ethernet16  routed      up       up     N/A         N/A
 Ethernet20  2048,2049,2050,2051     100G   9100    N/A   Ethernet20  routed      up       up     N/A         N/A
 Ethernet24  2560,2561,2562,2563     100G   9100    N/A   Ethernet24  routed      up       up     N/A         N/A
 Ethernet28  2824,2825,2826,2827     100G   9100    N/A   Ethernet28  routed      up       up     N/A         N/A
 Ethernet32  2832,2833,2834,2835     100G   9100    N/A   Ethernet32  routed      up       up     N/A         N/A
 Ethernet36  2816,2817,2818,2819     100G   9100    N/A   Ethernet36  routed      up       up     N/A         N/A
 Ethernet40  2568,2569,2570,2571     100G   9100    N/A   Ethernet40  routed      up       up     N/A         N/A
 Ethernet44  2576,2577,2578,2579     100G   9100    N/A   Ethernet44  routed      up       up     N/A         N/A
 Ethernet48  1536,1537,1538,1539     100G   9100    N/A   Ethernet48  routed      up       up     N/A         N/A
 Ethernet52  1800,1801,1802,1803     100G   9100    N/A   Ethernet52  routed      up       up     N/A         N/A
 Ethernet56  1552,1553,1554,1555     100G   9100    N/A   Ethernet56  routed      up       up     N/A         N/A
 Ethernet60  1544,1545,1546,1547     100G   9100    N/A   Ethernet60  routed      up       up     N/A         N/A
 Ethernet64  1296,1297,1298,1299     100G   9100    N/A   Ethernet64  routed      up       up     N/A         N/A
 Ethernet68  1288,1289,1290,1291     100G   9100    N/A   Ethernet68  routed      up       up     N/A         N/A
 Ethernet72  1280,1281,1282,1283     100G   9100    N/A   Ethernet72  routed      up       up     N/A         N/A
 Ethernet76  1032,1033,1034,1035     100G   9100    N/A   Ethernet76  routed      up       up     N/A         N/A
 Ethernet80      264,265,266,267     100G   9100    N/A   Ethernet80  routed      up       up     N/A         N/A
 Ethernet84      272,273,274,275     100G   9100    N/A   Ethernet84  routed      up       up     N/A         N/A
 Ethernet88          16,17,18,19     100G   9100    N/A   Ethernet88  routed      up       up     N/A         N/A
 Ethernet92              0,1,2,3     100G   9100    N/A   Ethernet92  routed      up       up     N/A         N/A
 Ethernet96      256,257,258,259     100G   9100    N/A   Ethernet96  routed      up       up     N/A         N/A
Ethernet100            8,9,10,11     100G   9100    N/A  Ethernet100  routed      up       up     N/A         N/A
Ethernet104  1024,1025,1026,1027     100G   9100    N/A  Ethernet104  routed      up       up     N/A         N/A
Ethernet108      768,769,770,771     100G   9100    N/A  Ethernet108  routed      up       up     N/A         N/A
Ethernet112      524,525,526,527     100G   9100    N/A  Ethernet112  routed      up       up     N/A         N/A
Ethernet116      776,777,778,779     100G   9100    N/A  Ethernet116  routed      up       up     N/A         N/A
Ethernet120      516,517,518,519     100G   9100    N/A  Ethernet120  routed      up       up     N/A         N/A
Ethernet124      528,529,530,531     100G   9100    N/A  Ethernet124  routed      up       up     N/A         N/A
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:00:33.310489 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:33.310537 : Run show version for range(0, 6)
2025-12-04 17:00:33.310546 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show version'"
2025-12-04 17:00:34.878825 : Run show interface status for range(0, 1)
2025-12-04 17:00:34.878856 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show interface status'"
2025-12-04 17:00:38.034609 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
  Interface    Lanes    Speed    MTU    FEC    Alias    Vlan    Oper    Admin    Type    Asym PFC
-----------  -------  -------  -----  -----  -------  ------  ------  -------  ------  ----------
Interface    Master    IPv4 address/mask    Admin/Oper    BGP Neighbor    Neighbor IP
-----------  --------  -------------------  ------------  --------------  -------------
Loopback0              100.1.0.35/32        up/up         N/A             N/A
docker0                240.127.1.1/24       up/down       N/A             N/A
eth0                   10.250.0.125/24      up/up         N/A             N/A
lo                     127.0.0.1/16         up/up         N/A             N/A
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
2025-12-04 17:00:38.081418 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:38.081485 : Run show interface status for range(0, 6)
2025-12-04 17:00:38.081495 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show interface status'"
2025-12-04 17:00:40.340489 : Run show ip interface for range(0, 1)
2025-12-04 17:00:40.340519 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show ip interface'"
2025-12-04 17:00:42.733962 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
Interface    Master    IPv4 address/mask    Admin/Oper    BGP Neighbor    Neighbor IP
-----------  --------  -------------------  ------------  --------------  -------------
Loopback0              100.1.0.29/32        up/up         N/A             N/A
docker0                240.127.1.1/24       up/down       N/A             N/A
eth0                   10.250.0.51/24       up/up         N/A             N/A
lo                     127.0.0.1/16         up/up         N/A             N/A
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
No.    Team Dev    Protocol    Ports
-----  ----------  ----------  -------
2025-12-04 17:00:42.781297 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:42.781365 : Run show ip interface for range(0, 6)
2025-12-04 17:00:42.781378 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show ip interface'"
2025-12-04 17:00:44.749215 : Run show interface portchannel for range(0, 1)
2025-12-04 17:00:44.749248 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show interface portchannel'"
2025-12-04 17:00:46.506049 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
No.    Team Dev    Protocol    Ports
-----  ----------  ----------  -------
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

C>*10.250.0.0/24 is directly connected, eth0, 00:24:13
2025-12-04 17:00:46.554912 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:46.555012 : Run show interface portchannel for range(0, 6)
2025-12-04 17:00:46.555024 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show interface portchannel'"
2025-12-04 17:00:47.989644 : Run show ip route for range(0, 1)
2025-12-04 17:00:47.989676 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show ip route'"
2025-12-04 17:00:49.730518 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

C>*10.250.0.0/24 is directly connected, eth0, 00:06:33
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:55:b8 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:3e:16:09 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:7a:62:82 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1e:ed:3b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:87:61:91 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:6a:80:e4 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:cd:5c:97 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d9:1b:4e brd ff:ff:ff:ff:ff:ff
10: eth8: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:5c:02:5a brd ff:ff:ff:ff:ff:ff
11: eth9: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b2:71:ac brd ff:ff:ff:ff:ff:ff
12: eth10: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:34:f7:2b brd ff:ff:ff:ff:ff:ff
13: eth11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:ba:91:a6 brd ff:ff:ff:ff:ff:ff
14: eth12: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:7e:a2:4e brd ff:ff:ff:ff:ff:ff
15: eth13: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:a4:ec:d1 brd ff:ff:ff:ff:ff:ff
16: eth14: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d7:f5:9c brd ff:ff:ff:ff:ff:ff
17: eth15: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:07:4a:a0 brd ff:ff:ff:ff:ff:ff
18: eth16: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:6c:a1:1b brd ff:ff:ff:ff:ff:ff
19: eth17: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:31:43:c9 brd ff:ff:ff:ff:ff:ff
20: eth18: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:ed:e1:4d brd ff:ff:ff:ff:ff:ff
21: eth19: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:65:98:41 brd ff:ff:ff:ff:ff:ff
22: eth20: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:2c:72:c8 brd ff:ff:ff:ff:ff:ff
23: eth21: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:eb:55:f5 brd ff:ff:ff:ff:ff:ff
24: eth22: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:fb:bd:96 brd ff:ff:ff:ff:ff:ff
25: eth23: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:c5:ee:f7 brd ff:ff:ff:ff:ff:ff
26: eth24: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:a3:db:fc brd ff:ff:ff:ff:ff:ff
27: eth25: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:ae:b2:34 brd ff:ff:ff:ff:ff:ff
28: eth26: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:59:f3:e1 brd ff:ff:ff:ff:ff:ff
29: eth27: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:db:3e:e4 brd ff:ff:ff:ff:ff:ff
30: eth28: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:a2:23:82 brd ff:ff:ff:ff:ff:ff
31: eth29: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:48:d2:a2 brd ff:ff:ff:ff:ff:ff
32: eth30: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f0:61:f6 brd ff:ff:ff:ff:ff:ff
33: eth31: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f3:cd:d7 brd ff:ff:ff:ff:ff:ff
34: eth32: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:3e:d3:d9 brd ff:ff:ff:ff:ff:ff
35: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:be:7d:d3:fe brd ff:ff:ff:ff:ff:ff
36: swveth1@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:04:55:eb:92:d3 brd ff:ff:ff:ff:ff:ff
37: veth1@swveth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:1a:d0:75:dc:81 brd ff:ff:ff:ff:ff:ff
38: swveth2@veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:ac:60:39:07:60 brd ff:ff:ff:ff:ff:ff
39: veth2@swveth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8a:fc:7e:44:a2:5e brd ff:ff:ff:ff:ff:ff
40: swveth3@veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:5e:8a:3d:a9:3c brd ff:ff:ff:ff:ff:ff
41: veth3@swveth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:f8:a8:b4:72:4b brd ff:ff:ff:ff:ff:ff
42: swveth4@veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4e:06:d9:9d:da:0f brd ff:ff:ff:ff:ff:ff
43: veth4@swveth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:51:3a:c4:a5:3d brd ff:ff:ff:ff:ff:ff
44: swveth5@veth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:a8:2a:75:9f:d1 brd ff:ff:ff:ff:ff:ff
45: veth5@swveth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:40:d2:bd:33:34 brd ff:ff:ff:ff:ff:ff
46: swveth6@veth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0e:92:37:93:4f:1d brd ff:ff:ff:ff:ff:ff
47: veth6@swveth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3e:48:1d:de:27:4a brd ff:ff:ff:ff:ff:ff
48: swveth7@veth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:de:c7:73:c9:0d brd ff:ff:ff:ff:ff:ff
49: veth7@swveth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:67:a3:de:d9:40 brd ff:ff:ff:ff:ff:ff
50: swveth8@veth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7a:22:f5:59:09:08 brd ff:ff:ff:ff:ff:ff
51: veth8@swveth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 56:3e:96:a7:d9:e0 brd ff:ff:ff:ff:ff:ff
52: swveth9@veth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7e:d3:da:b1:10:2a brd ff:ff:ff:ff:ff:ff
53: veth9@swveth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:1a:5c:91:54:0f brd ff:ff:ff:ff:ff:ff
54: swveth10@veth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:a2:39:8d:4d:e8 brd ff:ff:ff:ff:ff:ff
55: veth10@swveth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:68:e7:d8:d4:7f brd ff:ff:ff:ff:ff:ff
56: swveth11@veth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:31:c3:7b:51:97 brd ff:ff:ff:ff:ff:ff
57: veth11@swveth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:bf:1e:e1:de:97 brd ff:ff:ff:ff:ff:ff
58: swveth12@veth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:a5:99:85:b5:1f brd ff:ff:ff:ff:ff:ff
59: veth12@swveth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:22:57:92:f3:c1 brd ff:ff:ff:ff:ff:ff
61: swveth13@veth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:61:66:79:8d:72 brd ff:ff:ff:ff:ff:ff
62: veth13@swveth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4a:3b:25:8c:5f:69 brd ff:ff:ff:ff:ff:ff
64: swveth14@veth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:59:75:35:7b:dc brd ff:ff:ff:ff:ff:ff
65: veth14@swveth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:d8:50:63:a2:0d brd ff:ff:ff:ff:ff:ff
67: swveth15@veth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:e1:94:c8:89:04 brd ff:ff:ff:ff:ff:ff
68: veth15@swveth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:40:de:21:f0:b9 brd ff:ff:ff:ff:ff:ff
69: swveth16@veth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:29:69:ea:b3:47 brd ff:ff:ff:ff:ff:ff
70: veth16@swveth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:ef:70:98:c2:24 brd ff:ff:ff:ff:ff:ff
71: swveth17@veth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:d1:ac:30:05:fe brd ff:ff:ff:ff:ff:ff
72: veth17@swveth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:78:a5:ec:c5:11 brd ff:ff:ff:ff:ff:ff
73: swveth18@veth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 96:01:72:0e:89:5f brd ff:ff:ff:ff:ff:ff
74: veth18@swveth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8e:35:8e:32:6a:34 brd ff:ff:ff:ff:ff:ff
75: swveth19@veth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:ff:b5:2f:0b:d7 brd ff:ff:ff:ff:ff:ff
76: veth19@swveth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:b3:61:fd:2d:f8 brd ff:ff:ff:ff:ff:ff
77: swveth20@veth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:51:c5:bf:b5:51 brd ff:ff:ff:ff:ff:ff
78: veth20@swveth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:ff:44:b0:da:14 brd ff:ff:ff:ff:ff:ff
79: swveth21@veth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:23:da:df:1a:cf brd ff:ff:ff:ff:ff:ff
80: veth21@swveth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3e:18:39:a6:5e:2e brd ff:ff:ff:ff:ff:ff
81: swveth22@veth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:f9:12:b5:9e:44 brd ff:ff:ff:ff:ff:ff
82: veth22@swveth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:83:64:a5:a6:82 brd ff:ff:ff:ff:ff:ff
83: swveth23@veth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:1b:e8:25:84:55 brd ff:ff:ff:ff:ff:ff
84: veth23@swveth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:b5:d3:53:e3:d7 brd ff:ff:ff:ff:ff:ff
85: swveth24@veth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8a:13:dd:1d:c5:71 brd ff:ff:ff:ff:ff:ff
86: veth24@swveth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:68:84:aa:2a:36 brd ff:ff:ff:ff:ff:ff
87: swveth25@veth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d2:ee:6f:f4:fc:88 brd ff:ff:ff:ff:ff:ff
88: veth25@swveth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 06:21:44:3a:5b:a1 brd ff:ff:ff:ff:ff:ff
89: swveth26@veth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:19:18:ee:48:de brd ff:ff:ff:ff:ff:ff
90: veth26@swveth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:30:8e:58:2b:3a brd ff:ff:ff:ff:ff:ff
91: swveth27@veth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8e:04:30:26:2d:86 brd ff:ff:ff:ff:ff:ff
92: veth27@swveth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:29:48:a5:3d:bb brd ff:ff:ff:ff:ff:ff
93: swveth28@veth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:fb:ac:19:32:d3 brd ff:ff:ff:ff:ff:ff
94: veth28@swveth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:c3:2f:21:1c:f3 brd ff:ff:ff:ff:ff:ff
95: swveth29@veth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d2:1d:19:71:b0:f9 brd ff:ff:ff:ff:ff:ff
96: veth29@swveth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 96:0e:67:ec:57:a1 brd ff:ff:ff:ff:ff:ff
97: swveth30@veth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:af:3a:54:14:5d brd ff:ff:ff:ff:ff:ff
98: veth30@swveth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:ae:06:6b:7f:c3 brd ff:ff:ff:ff:ff:ff
99: swveth31@veth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:6a:c8:9b:68:35 brd ff:ff:ff:ff:ff:ff
100: veth31@swveth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0e:02:5c:f4:68:19 brd ff:ff:ff:ff:ff:ff
101: swveth32@veth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:50:d3:51:c3:a7 brd ff:ff:ff:ff:ff:ff
102: veth32@swveth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:b7:33:03:36:37 brd ff:ff:ff:ff:ff:ff
103: pimreg@NONE: <NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/pimreg 
104: Loopback0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether f2:4b:53:33:d9:2e brd ff:ff:ff:ff:ff:ff
105: Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
106: dummy: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master Bridge state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fa:f8:71:9b:e7:70 brd ff:ff:ff:ff:ff:ff
107: Ethernet92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
108: Ethernet100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
109: Ethernet88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
110: Ethernet96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
111: Ethernet80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
112: Ethernet84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
113: Ethernet120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
114: Ethernet112: <BROADCAST,MULTICAST> mtu 9100 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
115: Ethernet124: <BROADCAST,MULTICAST> mtu 9100 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
116: Ethernet108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
117: Ethernet116: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
118: Ethernet104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
119: Ethernet76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
120: Ethernet72: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
121: Ethernet68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
122: Ethernet64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
123: Ethernet48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
124: Ethernet60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
125: Ethernet56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
126: Ethernet16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
127: Ethernet52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
128: Ethernet20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
129: Ethernet12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
130: Ethernet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
131: Ethernet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
132: Ethernet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
133: Ethernet24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
134: Ethernet40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
135: Ethernet44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
136: Ethernet36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
137: Ethernet28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
138: Ethernet32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:00:49.777847 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:49.777894 : Run show ip route for range(0, 6)
2025-12-04 17:00:49.777903 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show ip route'"
2025-12-04 17:00:51.504019 : Run ip link for range(0, 1)
2025-12-04 17:00:51.504050 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 ip link'"
2025-12-04 17:00:52.558268 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b1:b9:e4 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:61:44:cf brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d9:92:0a brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:42:36:5c brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:ea:3e:a7 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:89:d8:7c brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:4e:f4:d0 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:61:af:d1 brd ff:ff:ff:ff:ff:ff
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:74:8d:70:c2 brd ff:ff:ff:ff:ff:ff
11: swveth1@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:38:e8:b4:f8:44 brd ff:ff:ff:ff:ff:ff
12: veth1@swveth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:de:c6:d2:70:ba brd ff:ff:ff:ff:ff:ff
13: swveth2@veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ba:a8:80:5e:06:e6 brd ff:ff:ff:ff:ff:ff
14: veth2@swveth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:8b:1d:06:69:15 brd ff:ff:ff:ff:ff:ff
15: swveth3@veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:8b:70:12:66:00 brd ff:ff:ff:ff:ff:ff
16: veth3@swveth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 56:97:90:f9:5b:68 brd ff:ff:ff:ff:ff:ff
17: swveth4@veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:3e:30:f6:3b:5b brd ff:ff:ff:ff:ff:ff
18: veth4@swveth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:00:8a:cc:24:2d brd ff:ff:ff:ff:ff:ff
19: swveth5@veth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:a1:21:03:df:fd brd ff:ff:ff:ff:ff:ff
20: veth5@swveth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:05:2a:32:eb:51 brd ff:ff:ff:ff:ff:ff
21: swveth6@veth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ca:3f:70:76:19:dd brd ff:ff:ff:ff:ff:ff
22: veth6@swveth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:98:ed:77:7a:b6 brd ff:ff:ff:ff:ff:ff
23: swveth7@veth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:45:d5:1e:16:1d brd ff:ff:ff:ff:ff:ff
24: veth7@swveth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e2:7e:cf:71:52:48 brd ff:ff:ff:ff:ff:ff
25: swveth8@veth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:d4:b0:7b:65:39 brd ff:ff:ff:ff:ff:ff
26: veth8@swveth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0e:78:73:91:51:41 brd ff:ff:ff:ff:ff:ff
27: swveth9@veth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ca:06:83:28:e7:dd brd ff:ff:ff:ff:ff:ff
28: veth9@swveth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:6c:8c:51:da:79 brd ff:ff:ff:ff:ff:ff
29: swveth10@veth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:e6:2e:ae:55:6e brd ff:ff:ff:ff:ff:ff
30: veth10@swveth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:14:6a:64:67:79 brd ff:ff:ff:ff:ff:ff
31: swveth11@veth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d2:2c:f3:4c:eb:b3 brd ff:ff:ff:ff:ff:ff
32: veth11@swveth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:11:62:25:88:ad brd ff:ff:ff:ff:ff:ff
33: swveth12@veth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:7c:72:f1:82:a3 brd ff:ff:ff:ff:ff:ff
34: veth12@swveth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4e:6b:9d:11:56:bc brd ff:ff:ff:ff:ff:ff
35: swveth13@veth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:7f:f5:74:ca:54 brd ff:ff:ff:ff:ff:ff
36: veth13@swveth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:9a:de:13:56:92 brd ff:ff:ff:ff:ff:ff
37: swveth14@veth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:c6:40:84:83:0f brd ff:ff:ff:ff:ff:ff
38: veth14@swveth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:bf:8e:40:b1:d9 brd ff:ff:ff:ff:ff:ff
39: swveth15@veth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:f1:64:f4:94:17 brd ff:ff:ff:ff:ff:ff
40: veth15@swveth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 96:d4:89:63:71:97 brd ff:ff:ff:ff:ff:ff
41: swveth16@veth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7a:3a:70:be:82:8c brd ff:ff:ff:ff:ff:ff
42: veth16@swveth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:a9:d0:d2:d3:6c brd ff:ff:ff:ff:ff:ff
43: swveth17@veth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:a0:a1:f4:8d:d9 brd ff:ff:ff:ff:ff:ff
44: veth17@swveth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4a:50:55:6a:54:92 brd ff:ff:ff:ff:ff:ff
45: swveth18@veth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:ab:b7:7e:43:eb brd ff:ff:ff:ff:ff:ff
46: veth18@swveth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 66:e9:f6:d6:4d:2e brd ff:ff:ff:ff:ff:ff
47: swveth19@veth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:88:5a:1a:2f:ce brd ff:ff:ff:ff:ff:ff
48: veth19@swveth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:1f:09:9e:15:2c brd ff:ff:ff:ff:ff:ff
49: swveth20@veth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:7b:b6:6e:42:75 brd ff:ff:ff:ff:ff:ff
50: veth20@swveth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3e:63:1d:64:c5:5a brd ff:ff:ff:ff:ff:ff
51: swveth21@veth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a6:07:be:03:9f:7a brd ff:ff:ff:ff:ff:ff
52: veth21@swveth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:b8:6d:b5:5f:67 brd ff:ff:ff:ff:ff:ff
53: swveth22@veth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7a:26:e7:f1:8d:49 brd ff:ff:ff:ff:ff:ff
54: veth22@swveth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 36:67:9b:08:57:0c brd ff:ff:ff:ff:ff:ff
55: swveth23@veth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 56:7e:64:b7:0a:18 brd ff:ff:ff:ff:ff:ff
56: veth23@swveth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:2e:df:5b:1e:51 brd ff:ff:ff:ff:ff:ff
57: swveth24@veth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7e:a7:02:46:e7:01 brd ff:ff:ff:ff:ff:ff
58: veth24@swveth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:38:d2:5c:15:4b brd ff:ff:ff:ff:ff:ff
59: swveth25@veth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 62:e6:6f:11:d4:c1 brd ff:ff:ff:ff:ff:ff
60: veth25@swveth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:ec:9c:02:10:98 brd ff:ff:ff:ff:ff:ff
61: swveth26@veth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:24:81:a3:8c:32 brd ff:ff:ff:ff:ff:ff
62: veth26@swveth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ae:26:20:fb:62:2d brd ff:ff:ff:ff:ff:ff
63: swveth27@veth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 16:b2:49:d7:06:87 brd ff:ff:ff:ff:ff:ff
64: veth27@swveth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 72:8d:9d:d0:3a:f0 brd ff:ff:ff:ff:ff:ff
65: swveth28@veth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:b4:e6:e3:f0:bf brd ff:ff:ff:ff:ff:ff
66: veth28@swveth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4a:9c:c9:30:e8:9d brd ff:ff:ff:ff:ff:ff
67: swveth29@veth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a6:a7:b5:4a:a2:b4 brd ff:ff:ff:ff:ff:ff
68: veth29@swveth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:60:77:dc:68:db brd ff:ff:ff:ff:ff:ff
69: swveth30@veth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 36:6e:f7:4d:5f:78 brd ff:ff:ff:ff:ff:ff
70: veth30@swveth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:df:e0:61:c7:e2 brd ff:ff:ff:ff:ff:ff
71: swveth31@veth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ca:d1:ef:e5:7b:65 brd ff:ff:ff:ff:ff:ff
72: veth31@swveth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:5e:24:9a:c7:21 brd ff:ff:ff:ff:ff:ff
73: swveth32@veth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:36:d5:9b:3a:9e brd ff:ff:ff:ff:ff:ff
74: veth32@swveth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d2:28:71:9f:75:eb brd ff:ff:ff:ff:ff:ff
124: pimreg@NONE: <NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/pimreg 
127: Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
128: Loopback0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether d6:44:72:a9:02:c2 brd ff:ff:ff:ff:ff:ff
129: dummy: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master Bridge state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether a2:91:77:a0:f7:2c brd ff:ff:ff:ff:ff:ff
130: Vrf1: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7e:6f:38:8b:33:1e brd ff:ff:ff:ff:ff:ff
131: Vrf2: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3e:fb:c8:c8:ef:dd brd ff:ff:ff:ff:ff:ff
10.250.0.0/24 dev eth0 proto kernel scope link src 10.250.0.125 
240.127.1.0/24 dev docker0 proto kernel scope link src 240.127.1.1 linkdown 
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
2025-12-04 17:00:52.605638 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:52.605686 : Run ip link for range(0, 6)
2025-12-04 17:00:52.605695 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 ip link'"
2025-12-04 17:00:53.623865 : Run ip route for range(0, 1)
2025-12-04 17:00:53.623896 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 ip route'"
2025-12-04 17:00:54.878151 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
10.250.0.0/24 dev eth0 proto kernel scope link src 10.250.0.51 
240.127.1.0/24 dev docker0 proto kernel scope link src 240.127.1.1 linkdown 
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:00:54.925077 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:00:54.925124 : Run ip route for range(0, 6)
2025-12-04 17:00:54.925133 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 ip route'"
2025-12-04 17:00:55.949605 : rm -rf /tmp/local_cache//1764837834.587788/
--- 1021.3637938499451 seconds ---
