Looking in indexes: https://mirrors.aliyun.com/pypi/simple/
Collecting paramiko>=3.5.1
  Using cached https://mirrors.aliyun.com/pypi/packages/15/f8/c7bd0ef12954a81a1d3cea60a13946bd9a49a0036a5927770c461eade7ae/paramiko-3.5.1-py3-none-any.whl (227 kB)
Requirement already satisfied: bcrypt>=3.2 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (4.0.1)
Requirement already satisfied: cryptography>=3.3 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (3.3.2)
Requirement already satisfied: pynacl>=1.5 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (1.5.0)
Requirement already satisfied: six>=1.4.1 in ./env-python3/lib/python3.8/site-packages (from cryptography>=3.3->paramiko>=3.5.1) (1.16.0)
Requirement already satisfied: cffi>=1.12 in ./env-python3/lib/python3.8/site-packages (from cryptography>=3.3->paramiko>=3.5.1) (1.15.1)
Requirement already satisfied: pycparser in ./env-python3/lib/python3.8/site-packages (from cffi>=1.12->cryptography>=3.3->paramiko>=3.5.1) (2.21)
Installing collected packages: paramiko
  Attempting uninstall: paramiko
    Found existing installation: paramiko 2.7.1
    Uninstalling paramiko-2.7.1:
      Successfully uninstalled paramiko-2.7.1
Successfully installed paramiko-3.5.1
=== Running tests in groups ===
Running: python3 -m pytest srv6/test_srv6_basic_sanity.py --inventory ../ansible/veos_vtb --host-pattern vlab-c-01 --testbed vms-kvm-ciscovs-7nodes --testbed_file vtestbed.yaml --log-cli-level warning --log-file-level debug --kube_master unset --showlocals --assert plain --show-capture no -rav --allow_recover --ignore=ptftests --ignore=acstests --ignore=saitests --ignore=scripts --ignore=k8s --ignore=sai_qualify --junit-xml=logs/tr.xml --log-file=logs/test.log --skip_sanity --disable_loganalyzer --neighbor_type=sonic
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
ansible: 2.9.27
rootdir: /data/sonic-mgmt/tests, configfile: pytest.ini
plugins: forked-1.6.0, allure-pytest-2.8.22, xdist-1.28.0, html-3.2.0, ansible-2.2.4, repeat-0.9.1, metadata-2.0.4, celery-4.4.7

----------------------------- live log collection ------------------------------
09:10:33 __init__.load_minigraph_facts            L0245 ERROR  | Failed to load minigraph basic facts, exception: CalledProcessError(2, ['ansible', '-m', 'minigraph_facts', '-i', '../ansible/veos_vtb', 'vlab-c-01', '-a', 'host=vlab-c-01'])
collected 9 items

srv6/test_srv6_basic_sanity.py::test_interface_on_each_node FAILED       [ 11%]
srv6/test_srv6_basic_sanity.py::test_check_bgp_neighbors FAILED          [ 22%]
srv6/test_srv6_basic_sanity.py::test_check_routes FAILED                 [ 33%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_via_trex FAILED       [ 44%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_via_ptf 
-------------------------------- live log call ---------------------------------
09:24:37 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 282, in test_traffic_check_via_ptf
    raise Exception("Traffic test failed")
Exception: Traffic test failed

FAILED                                                                   [ 55%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_local_link_fail_case 
-------------------------------- live log call ---------------------------------
09:24:54 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 306, in test_traffic_check_local_link_fail_case
    pe3.command(cmd)
  File "/data/sonic-mgmt/tests/common/devices/base.py", line 131, in _run
    raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
{"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.041869", "end": "2025-12-20 09:24:53.469421", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-20 09:24:53.427552", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

FAILED                                                                   [ 66%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_igp_fail_case 
-------------------------------- live log call ---------------------------------
09:25:11 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 377, in test_traffic_check_remote_igp_fail_case
    p4.command(cmd)
  File "/data/sonic-mgmt/tests/common/devices/base.py", line 131, in _run
    raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
{"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.046933", "end": "2025-12-20 09:25:11.401511", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-20 09:25:11.354578", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

FAILED                                                                   [ 77%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_bgp_fail_case 
-------------------------------- live log call ---------------------------------
09:25:26 __init__.pytest_runtest_call             L0040 ERROR  | Traceback (most recent call last):
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 1761, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/home/ubuntu/env-python3/lib/python3.8/site-packages/_pytest/python.py", line 192, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py", line 454, in test_traffic_check_remote_bgp_fail_case
    p3.command(cmd)
  File "/data/sonic-mgmt/tests/common/devices/base.py", line 131, in _run
    raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
{"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.044288", "end": "2025-12-20 09:25:26.246692", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-20 09:25:26.202404", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

FAILED                                                                   [ 88%]
srv6/test_srv6_basic_sanity.py::test_sbfd_functions SKIPPED (This te...) [100%]

=================================== FAILURES ===================================
_________________________ test_interface_on_each_node __________________________

duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}

    def test_interface_on_each_node(duthosts, rand_one_dut_hostname, nbrhosts):
        for vm_name in test_vm_names:
            nbrhost = nbrhosts[vm_name]['host']
            num, hwsku = find_node_interfaces(nbrhost)
            logger.debug("Get {} interfaces on {}, hwsku {}".format(num, vm_name, hwsku))
            if hwsku == "cisco-8101-p4-32x100-vs":
>               pytest_assert(num == 32)
E               Failed: None

duthosts   = [<MultiAsicSonicHost vlab-c-01>]
hwsku      = 'cisco-8101-p4-32x100-vs'
nbrhost    = <SonicHost VM0100>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
num        = 0
rand_one_dut_hostname = 'vlab-c-01'
vm_name    = 'PE1'

srv6/test_srv6_basic_sanity.py:137: Failed
___________________________ test_check_bgp_neighbors ___________________________

duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}

    def test_check_bgp_neighbors(duthosts, rand_one_dut_hostname, nbrhosts):
        logger.info("Check BGP Neighbors")
        # From PE3
        nbrhost = nbrhosts["PE3"]['host']
>       pytest_assert(
            wait_until(
                60, 10, 0, check_bgp_neighbors_func, nbrhost,
                ['2064:100::1d', '2064:200::1e', 'fc06::2', 'fc08::2']
            ),
            "wait for PE3 BGP neighbors up"
        )
E       Failed: wait for PE3 BGP neighbors up

duthosts   = [<MultiAsicSonicHost vlab-c-01>]
nbrhost    = <SonicHost VM0102>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
rand_one_dut_hostname = 'vlab-c-01'

srv6/test_srv6_basic_sanity.py:153: Failed
______________________________ test_check_routes _______________________________

duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}

    def test_check_routes(duthosts, rand_one_dut_hostname, nbrhosts):
        global_route = ""
        is_v6 = True
    
        # From PE3
        nbrhost = nbrhosts["PE3"]['host']
        logger.info("Check learnt vpn routes")
        # check remote learnt VPN routes via two PE1 and PE2
        dut1_ips = []
        for x in range(1, num_ce_routes+1):
            ip = "{}.{}/32".format(route_prefix_for_pe1_and_pe2, x)
            dut1_ips.append(ip)
>       check_routes(nbrhost, dut1_ips, ["2064:100::1d", "2064:200::1e"], "Vrf1")

dut1_ips   = ['192.100.0.1/32', '192.100.0.2/32', '192.100.0.3/32', '192.100.0.4/32', '192.100.0.5/32', '192.100.0.6/32', ...]
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
global_route = ''
ip         = '192.100.0.10/32'
is_v6      = True
nbrhost    = <SonicHost VM0102>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
rand_one_dut_hostname = 'vlab-c-01'
x          = 10

srv6/test_srv6_basic_sanity.py:198: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

nbrhost = <SonicHost VM0102>
ips = ['192.100.0.1/32', '192.100.0.2/32', '192.100.0.3/32', '192.100.0.4/32', '192.100.0.5/32', '192.100.0.6/32', ...]
nexthops = ['2064:100::1d', '2064:200::1e'], vrf = 'Vrf1', is_v6 = False

    def check_routes(nbrhost, ips, nexthops, vrf="", is_v6=False):
        # Add retry for debugging purpose
        count = 0
        ret = False
    
        #
        # Sleep 10 sec before retrying
        #
        sleep_duration_for_retry = 10
    
        # retry 3 times before claiming failure
        while count < 3 and ret == False:
            ret = check_routes_func(nbrhost, ips, nexthops, vrf, is_v6)
            if not ret:
                count = count + 1
                # sleep make sure all forwarding structures are settled down.
                time.sleep(sleep_duration_for_retry)
                logger.info("Sleep {} seconds to retry round {}".format(sleep_duration_for_retry, count))
    
>       pytest_assert(ret)
E       Failed: None

count      = 3
ips        = ['192.100.0.1/32', '192.100.0.2/32', '192.100.0.3/32', '192.100.0.4/32', '192.100.0.5/32', '192.100.0.6/32', ...]
is_v6      = False
nbrhost    = <SonicHost VM0102>
nexthops   = ['2064:100::1d', '2064:200::1e']
ret        = False
sleep_duration_for_retry = 10
vrf        = 'Vrf1'

srv6/srv6_utils.py:285: Failed
_________________________ test_traffic_check_via_trex __________________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_via_trex(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        #
        # Create a packet sending to 192.100.0.1
        #
    
        #add trex tream check
        test_ipv4_dip = "192.100.0.1"
        reset_topo_pkt_counter(ptfadapter) #reset counters before each run
        result = trex_run(test_ipv4_dip, duration = 5) #run sync mode
        #result example {'ptf_tot_tx': 10000, 'ptf_tot_rx': 10000, 'P3_tx_to_PE2': 2500, 'P2_tx_to_PE1': 2500, 'P1_tx_to_PE2': 2500, 'P1_tx_to_PE2': 2500}
        expect_list = {"ptf_tot_rx": 5000, "ptf_tot_tx": 5000, "PE3_tx_to_P4": 2500, "PE3_tx_to_P2": 2500} #check pkt count on any link
        logger.info("test_traffic_check vrf ip:{} test result:{}, expect_list:{}".format(test_ipv4_dip, result, expect_list))
>       pytest_assert(thresh_check(result, expect_list))
E       Failed: None

duthosts   = [<MultiAsicSonicHost vlab-c-01>]
expect_list = {'PE3_tx_to_P2': 2500, 'PE3_tx_to_P4': 2500, 'ptf_tot_rx': 5000, 'ptf_tot_tx': 5000}
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
rand_one_dut_hostname = 'vlab-c-01'
result     = {'P1_tx_to_PE1': 1, 'P1_tx_to_PE2': 1, 'P2_tx_to_P1': 1, 'P2_tx_to_P3': 0, ...}
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
test_ipv4_dip = '192.100.0.1'

srv6/test_srv6_basic_sanity.py:224: Failed
__________________________ test_traffic_check_via_ptf __________________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_via_ptf(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        # establish_and_configure_bfd(nbrhosts)
        tcp_pkt0 = simple_tcp_packet(
            ip_src="192.200.0.1",
            ip_dst="192.100.0.1",
            tcp_sport=8888,
            tcp_dport=6666,
            ip_ttl=64
        )
        pkt = tcp_pkt0.copy()
        pkt['Ether'].dst = sender_mac
    
        exp_pkt = tcp_pkt0.copy()
        exp_pkt['IP'].ttl -= 4
        masked2recv = Mask(exp_pkt)
        masked2recv.set_do_not_care_packet(scapy.Ether, "dst")
        masked2recv.set_do_not_care_packet(scapy.Ether, "src")
    
        # Enable tcpdump for debugging purpose, file_loc is host file location
        intf_list = ["VM0102-t1", "VM0102-t3"]
        file_loc = "~/sonic-mgmt/tests/logs/"
        prefix = "test_traffic_check"
        enable_tcpdump(intf_list, file_loc, prefix, True, True)
    
        # Add retry for debugging purpose
        count = 0
        done = False
        while count < 10 and done == False:
            try:
                runSendReceive(pkt, ptf_port_for_backplane, masked2recv, [ptf_port_for_backplane], True, ptfadapter)
                logger.info("Done with traffic run")
                done = True
            except Exception as e:
                count = count + 1
                logger.info("Retry round {}".format(count))
                # sleep make sure all forwarding structures are settled down.
                sleep_duration_for_retry = 60
                time.sleep(sleep_duration_for_retry)
                logger.info("Sleep {} seconds to make sure all forwarding structures are settled down".format(sleep_duration_for_retry))
    
        # Disable tcpdump
        disable_tcpdump(True)
    
        logger.info("Done {} count {}".format(done, count))
        if not done:
>           raise Exception("Traffic test failed")
E           Exception: Traffic test failed

count      = 10
done       = False
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
exp_pkt    = <Ether  dst=00:01:02:03:04:05 src=00:06:07:08:09:0a type=IPv4 |<IP  ihl=None tos=0x0 id=1 frag=0 ttl=60 proto=tcp src=...dst=192.100.0.1 |<TCP  sport=8888 dport=6666 flags=S |<Raw  load='test_srv6_basic_sanity test_srv6_basic_sanity ' |>>>>
file_loc   = '~/sonic-mgmt/tests/logs/'
intf_list  = ['VM0102-t1', 'VM0102-t3']
masked2recv = <ptf.mask.Mask object at 0x7f84ecde1be0>
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
pkt        = <Ether  dst=52:54:00:df:1c:5e src=00:06:07:08:09:0a type=IPv4 |<IP  ihl=None tos=0x0 id=1 frag=0 ttl=64 proto=tcp src=...dst=192.100.0.1 |<TCP  sport=8888 dport=6666 flags=S |<Raw  load='test_srv6_basic_sanity test_srv6_basic_sanity ' |>>>>
prefix     = 'test_traffic_check'
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
rand_one_dut_hostname = 'vlab-c-01'
sleep_duration_for_retry = 60
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
tcp_pkt0   = <Ether  dst=00:01:02:03:04:05 src=00:06:07:08:09:0a type=IPv4 |<IP  ihl=None tos=0x0 id=1 frag=0 ttl=64 proto=tcp src=...x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%&\'()*+,-' |>>>>

srv6/test_srv6_basic_sanity.py:282: Exception
___________________ test_traffic_check_local_link_fail_case ____________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_local_link_fail_case(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        filename = "zebra_case_1_locallink_down.txt"
        docker_filename = "/tmp/{}".format(filename)
        vm = "PE3"
        pe3 = nbrhosts[vm]['host']
        p2 = nbrhosts["P2"]['host']
    
        logname = "zebra_case_1_locallink_down_running_log.txt"
        # Recording
        recording_fwding_chain(pe3, logname, "Before starting local link fail case")
        #
        # Turn on frr debug
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, True)
        #
        # shut down the link between PE3 and P2
        #
        cmd = "sudo ifconfig Ethernet4 down"
>       pe3.command(cmd)

cmd        = 'sudo ifconfig Ethernet4 down'
docker_filename = '/tmp/zebra_case_1_locallink_down.txt'
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
filename   = 'zebra_case_1_locallink_down.txt'
logname    = 'zebra_case_1_locallink_down_running_log.txt'
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
p2         = <SonicHost VM0104>
pe3        = <SonicHost VM0102>
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
rand_one_dut_hostname = 'vlab-c-01'
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
vm         = 'PE3'

srv6/test_srv6_basic_sanity.py:306: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost VM0102>, module_args = ['sudo ifconfig Ethernet4 down']
complex_args = {}
previous_frame = <frame at 0x3681b30, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 306, code test_traffic_check_local_link_fail_case>
filename = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
line_number = 306, function_name = 'test_traffic_check_local_link_fail_case'
lines = ['    pe3.command(cmd)\n'], index = 0, verbose = True
module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
        res = self.module(*module_args, **complex_args)[self.hostname]
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} Result => {}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name, json.dumps(res, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} done, is_failed={}, rc={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    res.is_failed,
                    res.get('rc', None)
                )
            )
    
        if (res.is_failed or 'exception' in res) and not module_ignore_errors:
>           raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
E           tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
E           {"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.041869", "end": "2025-12-20 09:24:53.469421", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-20 09:24:53.427552", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

complex_args = {}
filename   = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
function_name = 'test_traffic_check_local_link_fail_case'
index      = 0
line_number = 306
lines      = ['    pe3.command(cmd)\n']
module_args = ['sudo ifconfig Ethernet4 down']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x3681b30, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 306, code test_traffic_check_local_link_fail_case>
res        = {'failed': True, 'msg': 'non-zero return code', 'cmd': ['sudo', 'ifconfig', 'Ethernet4', 'down'], 'stdout': '', 'stder...nes': [], 'stderr_lines': ['Ethernet4: ERROR while getting interface flags: No such device'], '_ansible_no_log': False}
self       = <SonicHost VM0102>
verbose    = True

common/devices/base.py:131: RunAnsibleModuleFail
___________________ test_traffic_check_remote_igp_fail_case ____________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_remote_igp_fail_case(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        filename = "zebra_case_2_remotelink_down.txt"
        docker_filename = "/tmp/{}".format(filename)
        vm = "PE3"
        pe3 = nbrhosts[vm]['host']
    
        logname = "zebra_case_2_remotelink_down_running_log.txt"
        # Recording
        recording_fwding_chain(pe3, logname, "Before starting remote link fail case")
        #
        # Turn on frr debug
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, True)
        #
        # shut down the link between P3 and P1, P2, P4
        #
        p1 = duthosts[rand_one_dut_hostname]
        p2 = nbrhosts["P2"]['host']
        p3 = nbrhosts["P3"]['host']
        p4 = nbrhosts["P4"]['host']
    
        cmd = "sudo ifconfig Ethernet124 down"
        p1.command(cmd)
        cmd = "sudo ifconfig Ethernet4 down"
        p2.command(cmd)
        cmd = "sudo ifconfig Ethernet4 down"
>       p4.command(cmd)

cmd        = 'sudo ifconfig Ethernet4 down'
docker_filename = '/tmp/zebra_case_2_remotelink_down.txt'
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
filename   = 'zebra_case_2_remotelink_down.txt'
logname    = 'zebra_case_2_remotelink_down_running_log.txt'
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
p1         = <MultiAsicSonicHost vlab-c-01>
p2         = <SonicHost VM0104>
p3         = <SonicHost VM0103>
p4         = <SonicHost VM0105>
pe3        = <SonicHost VM0102>
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
rand_one_dut_hostname = 'vlab-c-01'
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
vm         = 'PE3'

srv6/test_srv6_basic_sanity.py:377: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost VM0105>, module_args = ['sudo ifconfig Ethernet4 down']
complex_args = {}
previous_frame = <frame at 0x34c4660, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 377, code test_traffic_check_remote_igp_fail_case>
filename = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
line_number = 377, function_name = 'test_traffic_check_remote_igp_fail_case'
lines = ['    p4.command(cmd)\n'], index = 0, verbose = True
module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
        res = self.module(*module_args, **complex_args)[self.hostname]
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} Result => {}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name, json.dumps(res, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} done, is_failed={}, rc={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    res.is_failed,
                    res.get('rc', None)
                )
            )
    
        if (res.is_failed or 'exception' in res) and not module_ignore_errors:
>           raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
E           tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
E           {"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.046933", "end": "2025-12-20 09:25:11.401511", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-20 09:25:11.354578", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

complex_args = {}
filename   = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
function_name = 'test_traffic_check_remote_igp_fail_case'
index      = 0
line_number = 377
lines      = ['    p4.command(cmd)\n']
module_args = ['sudo ifconfig Ethernet4 down']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x34c4660, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 377, code test_traffic_check_remote_igp_fail_case>
res        = {'failed': True, 'msg': 'non-zero return code', 'cmd': ['sudo', 'ifconfig', 'Ethernet4', 'down'], 'stdout': '', 'stder...nes': [], 'stderr_lines': ['Ethernet4: ERROR while getting interface flags: No such device'], '_ansible_no_log': False}
self       = <SonicHost VM0105>
verbose    = True

common/devices/base.py:131: RunAnsibleModuleFail
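Both SRv6 fail-case tests above die the same way: `sudo ifconfig Ethernet4 down` on a neighbor VM exits with rc 255 and `Ethernet4: ERROR while getting interface flags: No such device`, so the port name does not exist on that VM's image. A minimal sketch of a check that classifies this result, assuming only the Ansible result-dict shape shown in the failure output (`rc`, `stderr_lines`); the helper name `is_missing_device` is hypothetical:

```python
def is_missing_device(res):
    """Return True when an Ansible command result indicates the target
    interface does not exist: ifconfig exits 255 and prints
    'No such device' on stderr."""
    return res.get("rc") == 255 and any(
        "No such device" in line for line in res.get("stderr_lines", [])
    )

# Result shape copied from the failure above
res = {
    "rc": 255,
    "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"],
}
print(is_missing_device(res))  # → True
```

In the test itself, passing `module_ignore_errors=True` (which `_run` above pops from `complex_args`) would return `res` for inspection instead of raising `RunAnsibleModuleFail`, letting the test skip or fail with a clearer message when the interface is absent.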
___________________ test_traffic_check_remote_bgp_fail_case ____________________

tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duthosts = [<MultiAsicSonicHost vlab-c-01>], rand_one_dut_hostname = 'vlab-c-01'
ptfhost = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
nbrhosts = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>

    def test_traffic_check_remote_bgp_fail_case(tbinfo, duthosts, rand_one_dut_hostname, ptfhost, nbrhosts, ptfadapter):
        filename = "zebra_case_3_remote_peer_down.txt"
        docker_filename = "/tmp/{}".format(filename)
        vm = "PE3"
        pe3 = nbrhosts[vm]['host']
    
        logname = "zebra_case_3_remote_peer_down_running_log.txt"
        # Recording
        recording_fwding_chain(pe3, logname, "Before starting remote PE failure case")
        #
        # Turn on frr debug
        #
        turn_on_off_frr_debug(duthosts, rand_one_dut_hostname, nbrhosts, docker_filename, vm, True)
        #
        # shut down the link between PE1 and P1, P3
        #
        p1 = duthosts[rand_one_dut_hostname]
        pe1 = nbrhosts["PE1"]['host']
        p3 = nbrhosts["P3"]['host']
    
        cmd = "sudo ifconfig Ethernet112 down"
        p1.command(cmd)
        cmd = "sudo ifconfig Ethernet4 down"
>       p3.command(cmd)

cmd        = 'sudo ifconfig Ethernet4 down'
docker_filename = '/tmp/zebra_case_3_remote_peer_down.txt'
duthosts   = [<MultiAsicSonicHost vlab-c-01>]
filename   = 'zebra_case_3_remote_peer_down.txt'
logname    = 'zebra_case_3_remote_peer_down_running_log.txt'
nbrhosts   = {'P2': <SonicHost VM0104>, 'P3': <SonicHost VM0103>, 'P4': <SonicHost VM0105>, 'PE1': <SonicHost VM0100>, ...}
p1         = <MultiAsicSonicHost vlab-c-01>
p3         = <SonicHost VM0103>
pe1        = <SonicHost VM0100>
pe3        = <SonicHost VM0102>
ptfadapter = <tests.common.plugins.ptfadapter.ptfadapter.PtfTestAdapter testMethod=runTest>
ptfhost    = <tests.common.devices.ptf.PTFHost object at 0x7f84e4625e80>
rand_one_dut_hostname = 'vlab-c-01'
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
vm         = 'PE3'

srv6/test_srv6_basic_sanity.py:454: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost VM0103>, module_args = ['sudo ifconfig Ethernet4 down']
complex_args = {}
previous_frame = <frame at 0x3653a90, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 454, code test_traffic_check_remote_bgp_fail_case>
filename = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
line_number = 454, function_name = 'test_traffic_check_remote_bgp_fail_case'
lines = ['    p3.command(cmd)\n'], index = 0, verbose = True
module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
        res = self.module(*module_args, **complex_args)[self.hostname]
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} Result => {}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name, json.dumps(res, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} done, is_failed={}, rc={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    res.is_failed,
                    res.get('rc', None)
                )
            )
    
        if (res.is_failed or 'exception' in res) and not module_ignore_errors:
>           raise RunAnsibleModuleFail("run module {} failed".format(self.module_name), res)
E           tests.common.errors.RunAnsibleModuleFail: run module command failed, Ansible Results =>
E           {"changed": true, "cmd": ["sudo", "ifconfig", "Ethernet4", "down"], "delta": "0:00:00.044288", "end": "2025-12-20 09:25:26.246692", "failed": true, "msg": "non-zero return code", "rc": 255, "start": "2025-12-20 09:25:26.202404", "stderr": "Ethernet4: ERROR while getting interface flags: No such device", "stderr_lines": ["Ethernet4: ERROR while getting interface flags: No such device"], "stdout": "", "stdout_lines": [], "warnings": ["Consider using 'become', 'become_method', and 'become_user' rather than running sudo"]}

complex_args = {}
filename   = '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py'
function_name = 'test_traffic_check_remote_bgp_fail_case'
index      = 0
line_number = 454
lines      = ['    p3.command(cmd)\n']
module_args = ['sudo ifconfig Ethernet4 down']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x3653a90, file '/data/sonic-mgmt/tests/srv6/test_srv6_basic_sanity.py', line 454, code test_traffic_check_remote_bgp_fail_case>
res        = {'failed': True, 'msg': 'non-zero return code', 'cmd': ['sudo', 'ifconfig', 'Ethernet4', 'down'], 'stdout': '', 'stder...nes': [], 'stderr_lines': ['Ethernet4: ERROR while getting interface flags: No such device'], '_ansible_no_log': False}
self       = <SonicHost VM0103>
verbose    = True

common/devices/base.py:131: RunAnsibleModuleFail
=============================== warnings summary ===============================
common/plugins/loganalyzer/system_msg_handler.py:1
  /data/sonic-mgmt/tests/common/plugins/loganalyzer/system_msg_handler.py:1: DeprecationWarning: invalid escape sequence \ 
    '''

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
------------ generated xml file: /data/sonic-mgmt/tests/logs/tr.xml ------------
=========================== short test summary info ============================
SKIPPED [1] srv6/test_srv6_basic_sanity.py:500: This test is temporarily disabled due to configuration changes.
FAILED srv6/test_srv6_basic_sanity.py::test_interface_on_each_node - Failed: ...
FAILED srv6/test_srv6_basic_sanity.py::test_check_bgp_neighbors - Failed: wai...
FAILED srv6/test_srv6_basic_sanity.py::test_check_routes - Failed: None
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_via_trex - Failed: ...
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_via_ptf - Exception...
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_local_link_fail_case
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_igp_fail_case
FAILED srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_bgp_fail_case
============= 8 failed, 1 skipped, 1 warning in 899.97s (0:14:59) ==============
2025-12-20 17:10:12.872611 : Get input param file /root/workspace/PhoenixWingDailySRv6Test/jenkins_665/input_param_json.txt
2025-12-20 17:10:12.872773 : Get file lock for {'user_info': 'PhoenixWing_Daily_SRv6_Test_665'}
2025-12-20 17:10:13.874064 : Found index 1 for action read, user_info PhoenixWing_Daily_SRv6_Test_665
2025-12-20 17:10:13.874109 : Release file lock for {'user_info': 'PhoenixWing_Daily_SRv6_Test_665', 'action': 'read', 'output_vm': {'index': 3, 'user_info': 'PhoenixWing_Daily_SRv6_Test_665'}, 'output_index': 1, 'output_prefix': '192.168.0'}
2025-12-20 17:10:13.874151 : read_vm_reservation : {"user_info": "PhoenixWing_Daily_SRv6_Test_665", "action": "read", "output_vm": {"index": 3, "user_info": "PhoenixWing_Daily_SRv6_Test_665"}, "output_index": 1, "output_prefix": "192.168.0"}
2025-12-20 17:10:13.874310 : ifconfig | grep 30.57.186.111
2025-12-20 17:10:13.876891 : ifconfig | grep 30.57.186.42
2025-12-20 17:10:13.879121 : ifconfig | grep 30.57.186.79
2025-12-20 17:10:13.881268 : ifconfig | grep 30.57.186.80
2025-12-20 17:10:13.883517 : ifconfig | grep 30.57.186.218
2025-12-20 17:10:13.885539 : ifconfig | grep 30.57.186.175
2025-12-20 17:10:13.887659 : ifconfig | grep 11.166.8.106
2025-12-20 17:10:13.889558 : ifconfig | grep 11.166.8.104
2025-12-20 17:10:13.891626 : DEBUG_ARR:         inet 11.166.8.104  netmask 255.255.240.0  broadcast 11.166.15.255
2025-12-20 17:10:13.891654 : Found local server setting for 11.166.8.104
2025-12-20 17:10:13.891661 : Set local ip as 192.168.0.3
{   'address': '11.166.8.104',
    'host_port': 'eth0',
    'jenkin_node_name': 'Pytest_ECS_104',
    'password': 'Alin00000s!',
    'user': 'root',
    'vm_bridge': 'vmbr0',
    'vm_gw': '192.168.0.1',
    'vmip': '192.168.0.2'}
2025-12-20 17:10:13.891831 : mkdir -p /tmp/local_cache//1766221812.8726053/
Run pytest on 11.166.8.104 vmip 192.168.0.3, vm name _192.168.0.3
Get input topo vms-kvm-ciscovs-7nodes
Get input test case  -c "srv6/test_srv6_basic_sanity.py" 
2025-12-20 17:10:13.893599 : ping 192.168.0.3 -c 2
2025-12-20 17:10:14.921833 : DEBUG_ARR: PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
2025-12-20 17:10:14.921870 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.700 ms
2025-12-20 17:10:14.921874 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.162 ms
2025-12-20 17:10:14.921876 : DEBUG_ARR: 
2025-12-20 17:10:14.921880 : DEBUG_ARR: --- 192.168.0.3 ping statistics ---
2025-12-20 17:10:14.921883 : DEBUG_ARR: 2 packets transmitted, 2 received, 0% packet loss, time 1022ms
2025-12-20 17:10:14.921885 : DEBUG_ARR: rtt min/avg/max/mdev = 0.162/0.431/0.700/0.269 ms
2025-12-20 17:10:14.921904 : ping 192.168.0.3 -c 2
2025-12-20 17:10:15.944656 : DEBUG_ARR: PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
2025-12-20 17:10:15.944694 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.085 ms
2025-12-20 17:10:15.944697 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.310 ms
2025-12-20 17:10:15.944699 : DEBUG_ARR: 
2025-12-20 17:10:15.944702 : DEBUG_ARR: --- 192.168.0.3 ping statistics ---
2025-12-20 17:10:15.944704 : DEBUG_ARR: 2 packets transmitted, 2 received, 0% packet loss, time 1020ms
2025-12-20 17:10:15.944707 : DEBUG_ARR: rtt min/avg/max/mdev = 0.085/0.197/0.310/0.112 ms
2025-12-20 17:10:15.944734 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "docker exec --user ubuntu sonic-mgmt-test bash -c 'ls'"
2025-12-20 17:10:17.433313 : Run sudo monit unmonitor container_checker for range(0, 1)
2025-12-20 17:10:17.433345 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 sudo monit unmonitor container_checker'"
2025-12-20 17:10:19.114485 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump"
2025-12-20 17:10:19.848211 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "sudo chmod 777 /var/run/openvswitch/*"
2025-12-20 17:10:20.551811 : sshpass -p "123" scp   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" /root/workspace/PhoenixWingDailySRv6Test/jenkins_665/input_param_json.txt ubuntu@192.168.0.3:~/
2025-12-20 17:10:21.289312 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "docker exec --user ubuntu sonic-mgmt-test bash -c 'python3 -m venv ~/env-python3 ; source ~/env-python3/bin/activate;  pip install -i https://mirrors.aliyun.com/pypi/simple/  --upgrade \"paramiko>=3.5.1\";  cd /data/sonic-mgmt/tests; ./run_tests.sh -n vms-kvm-ciscovs-7nodes -d vlab-c-01  -c "srv6/test_srv6_basic_sanity.py"  -f vtestbed.yaml -i ../ansible/veos_vtb  -u  -e --skip_sanity -e --disable_loganalyzer -e --neighbor_type=sonic '"
2025-12-20 17:25:28.917601 : Run sudo ls -l  /etc/sonic/frr/* for range(0, 1)
2025-12-20 17:25:28.917633 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 sudo ls -l  /etc/sonic/frr/*'"
2025-12-20 17:25:30.298099 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
 09:25:31 up 36 min,  0 user,  load average: 20.32, 21.70, 19.31
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:30.346889 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
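The `vms_yml` offsets above combine with the testbed's `vm_base` (`VM0100` for vms-kvm-ciscovs-7nodes) to produce the neighbor VM names seen in `nbrhosts` earlier in the log; a sketch of that mapping (the helper name `vm_name` is illustrative, not from sonic-mgmt):

```python
def vm_name(vm_base, offset):
    # Neighbor VM name = vm_base plus the per-neighbor vm_offset,
    # preserving the zero-padded width (VM0100 + offset 5 -> VM0105).
    prefix, num = vm_base[:2], int(vm_base[2:])
    return "{}{:0{w}d}".format(prefix, num + offset, w=len(vm_base) - 2)

# Offsets from topo_ciscovs-7nodes.yml as dumped above
vms_yml = {"PE1": {"vm_offset": 0}, "PE2": {"vm_offset": 1}, "PE3": {"vm_offset": 2},
           "P3": {"vm_offset": 3}, "P2": {"vm_offset": 4}, "P4": {"vm_offset": 5}}
mapping = {name: vm_name("VM0100", v["vm_offset"]) for name, v in vms_yml.items()}
print(mapping["P4"])  # → VM0105, matching nbrhosts {'P4': <SonicHost VM0105>, ...}
```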
2025-12-20 17:25:30.346948 : Run sudo ls -l  /etc/sonic/frr/* for range(0, 6)
2025-12-20 17:25:30.346957 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 sudo ls -l  /etc/sonic/frr/*'"
2025-12-20 17:25:31.656861 : Run uptime for range(0, 1)
2025-12-20 17:25:31.656891 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 uptime'"
2025-12-20 17:25:32.759018 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:32.806775 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:32.806827 : Run uptime for range(0, 6)
 09:25:33 up 46 min,  0 user,  load average: 20.46, 19.75, 18.83
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS     NAMES
4cc483564fa6   docker-snmp:latest                   "/usr/bin/docker-snm…"   29 minutes ago   Up 20 minutes             snmp
fd8a7284be04   docker-platform-monitor:latest       "/usr/bin/docker_ini…"   29 minutes ago   Up 20 minutes             pmon
8d9c802408ae   docker-sonic-mgmt-framework:latest   "/usr/local/bin/supe…"   29 minutes ago   Up 20 minutes             mgmt-framework
627d554e2682   docker-lldp:latest                   "/usr/bin/docker-lld…"   29 minutes ago   Up 20 minutes             lldp
98a8653185e8   docker-sonic-gnmi:latest             "/usr/local/bin/supe…"   29 minutes ago   Up 20 minutes             gnmi
9a37fb52a2e6   docker-router-advertiser:latest      "/usr/bin/docker-ini…"   32 minutes ago   Up 23 minutes             radv
908c7454531d   docker-eventd:latest                 "/usr/local/bin/supe…"   32 minutes ago   Up 23 minutes             eventd
b2104bd6ad59   docker-fpm-frr:latest                "/usr/bin/docker_ini…"   32 minutes ago   Up 23 minutes             bgp
a4e0a8e85b05   docker-syncd-ciscovs:latest          "/usr/bin/docker_ini…"   32 minutes ago   Up 23 minutes             syncd
f1388b8ed356   docker-teamd:latest                  "/usr/local/bin/supe…"   32 minutes ago   Up 23 minutes             teamd
17311cf27807   docker-sysmgr:latest                 "/usr/local/bin/supe…"   32 minutes ago   Up 23 minutes             sysmgr
f98f714e86f3   docker-orchagent:latest              "/usr/bin/docker-ini…"   32 minutes ago   Up 23 minutes             swss
35f4e0ef586b   docker-database:latest               "/usr/local/bin/dock…"   33 minutes ago   Up 33 minutes             database
2025-12-20 17:25:32.806836 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 uptime'"
2025-12-20 17:25:33.891772 : Run docker ps for range(0, 1)
2025-12-20 17:25:33.891801 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 docker ps'"
2025-12-20 17:25:35.076123 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS     NAMES
34c36a77734d   docker-snmp:latest                   "/usr/bin/docker-snm…"   40 minutes ago   Up 4 minutes              snmp
9ef964a467bb   docker-platform-monitor:latest       "/usr/bin/docker_ini…"   41 minutes ago   Up 25 minutes             pmon
414680519e8e   docker-sonic-mgmt-framework:latest   "/usr/local/bin/supe…"   41 minutes ago   Up 25 minutes             mgmt-framework
a33371424e3a   docker-lldp:latest                   "/usr/bin/docker-lld…"   41 minutes ago   Up 25 minutes             lldp
8bcea766198d   docker-sonic-gnmi:latest             "/usr/local/bin/supe…"   41 minutes ago   Up 25 minutes             gnmi
5ca19af4f850   docker-router-advertiser:latest      "/usr/bin/docker-ini…"   45 minutes ago   Up 5 minutes              radv
ad50b5ea0521   docker-syncd-ciscovs:latest          "/usr/bin/docker_ini…"   45 minutes ago   Up 5 minutes              syncd
d63740093975   docker-teamd:latest                  "/usr/local/bin/supe…"   45 minutes ago   Up 5 minutes              teamd
a9e27bd88618   docker-fpm-frr:latest                "/usr/bin/docker_ini…"   45 minutes ago   Up 5 minutes              bgp
bcbe526041f7   docker-sysmgr:latest                 "/usr/local/bin/supe…"   45 minutes ago   Up 28 minutes             sysmgr
9f801239f2d9   docker-eventd:latest                 "/usr/local/bin/supe…"   45 minutes ago   Up 28 minutes             eventd
9eace2ed16c9   docker-orchagent:latest              "/usr/bin/docker-ini…"   45 minutes ago   Up 5 minutes              swss
418a75fe94f7   docker-database:latest               "/usr/local/bin/dock…"   45 minutes ago   Up 45 minutes             database

SONiC Software Version: SONiC.phoenixwing_08192025.391-dirty-20251219.084725
SONiC OS Version: 12
Distribution: Debian 12.12
Kernel: 6.1.0-29-2-amd64
Build commit: 885eae54f
Build date: Fri Dec 19 09:57:19 UTC 2025
Built by: joy@joy

Platform: x86_64-kvm_x86_64-r0
HwSKU: cisco-8101-p4-32x100-vs
ASIC: cisco-ngdp-vs
ASIC Count: 1
Serial Number: N/A
Model Number: N/A
Hardware Revision: N/A
Uptime: 09:25:36 up 36 min,  0 user,  load average: 20.86, 21.78, 19.35
Date: Sat 20 Dec 2025 09:25:36

Docker images:
REPOSITORY                    TAG                                              IMAGE ID       SIZE
docker-macsec                 latest                                           874c1eefd246   319MB
docker-macsec                 phoenixwing_08192025.391-dirty-20251219.084725   874c1eefd246   319MB
docker-dhcp-relay             latest                                           a07908f87fa6   295MB
docker-dhcp-relay             phoenixwing_08192025.391-dirty-20251219.084725   a07908f87fa6   295MB
docker-teamd                  latest                                           ad76ff0a44bd   316MB
docker-teamd                  phoenixwing_08192025.391-dirty-20251219.084725   ad76ff0a44bd   316MB
docker-sysmgr                 latest                                           eccf919e9436   298MB
docker-sysmgr                 phoenixwing_08192025.391-dirty-20251219.084725   eccf919e9436   298MB
docker-sonic-mgmt-framework   latest                                           ae33b0f06446   380MB
docker-sonic-mgmt-framework   phoenixwing_08192025.391-dirty-20251219.084725   ae33b0f06446   380MB
docker-snmp                   latest                                           52ff50b68567   311MB
docker-snmp                   phoenixwing_08192025.391-dirty-20251219.084725   52ff50b68567   311MB
docker-sflow                  latest                                           549ec52de645   317MB
docker-sflow                  phoenixwing_08192025.391-dirty-20251219.084725   549ec52de645   317MB
docker-router-advertiser      latest                                           3b686318ee07   286MB
docker-router-advertiser      phoenixwing_08192025.391-dirty-20251219.084725   3b686318ee07   286MB
docker-platform-monitor       latest                                           473732caa65d   420MB
docker-platform-monitor       phoenixwing_08192025.391-dirty-20251219.084725   473732caa65d   420MB
docker-orchagent              latest                                           9f64f043be36   328MB
docker-orchagent              phoenixwing_08192025.391-dirty-20251219.084725   9f64f043be36   328MB
docker-nat                    latest                                           9280e2a0bb73   319MB
docker-nat                    phoenixwing_08192025.391-dirty-20251219.084725   9280e2a0bb73   319MB
docker-mux                    latest                                           8ba7c1cc9b34   338MB
docker-mux                    phoenixwing_08192025.391-dirty-20251219.084725   8ba7c1cc9b34   338MB
docker-lldp                   latest                                           392df2386d9f   332MB
docker-lldp                   phoenixwing_08192025.391-dirty-20251219.084725   392df2386d9f   332MB
docker-sonic-gnmi             latest                                           072c183ae853   402MB
docker-sonic-gnmi             phoenixwing_08192025.391-dirty-20251219.084725   072c183ae853   402MB
docker-gnmi-watchdog          latest                                           f8a005034c76   294MB
docker-gnmi-watchdog          phoenixwing_08192025.391-dirty-20251219.084725   f8a005034c76   294MB
docker-fpm-frr                latest                                           ae73654f0677   365MB
docker-fpm-frr                phoenixwing_08192025.391-dirty-20251219.084725   ae73654f0677   365MB
docker-eventd                 latest                                           d5dd7407c98f   286MB
docker-eventd                 phoenixwing_08192025.391-dirty-20251219.084725   d5dd7407c98f   286MB
docker-database               latest                                           35cafd1c7a33   299MB
docker-database               phoenixwing_08192025.391-dirty-20251219.084725   35cafd1c7a33   299MB
docker-sonic-bmp              latest                                           b4e5693a9f8d   288MB
docker-sonic-bmp              phoenixwing_08192025.391-dirty-20251219.084725   b4e5693a9f8d   288MB
docker-bmp-watchdog           latest                                           66da34b1bd6c   286MB
docker-bmp-watchdog           phoenixwing_08192025.391-dirty-20251219.084725   66da34b1bd6c   286MB
docker-auditd                 latest                                           b2874a39b213   286MB
docker-auditd                 phoenixwing_08192025.391-dirty-20251219.084725   b2874a39b213   286MB
docker-auditd-watchdog        latest                                           c8075e061a0d   289MB
docker-auditd-watchdog        phoenixwing_08192025.391-dirty-20251219.084725   c8075e061a0d   289MB
docker-syncd-ciscovs          latest                                           b2adb363cb2a   1.26GB
docker-syncd-ciscovs          phoenixwing_08192025.391-dirty-20251219.084725   b2adb363cb2a   1.26GB

2025-12-20 17:25:35.123478 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:35.123524 : Run docker ps for range(0, 6)
2025-12-20 17:25:35.123533 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 docker ps'"
2025-12-20 17:25:36.204207 : Run show version for range(0, 1)
2025-12-20 17:25:36.204236 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show version'"
2025-12-20 17:25:38.064338 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env

SONiC Software Version: SONiC.phoenixwing_08192025.391-dirty-20251219.084725
SONiC OS Version: 12
Distribution: Debian 12.12
Kernel: 6.1.0-29-2-amd64
Build commit: 885eae54f
Build date: Fri Dec 19 09:57:19 UTC 2025
Built by: joy@joy

Platform: x86_64-kvm_x86_64-r0
HwSKU: cisco-8101-p4-32x100-vs
ASIC: cisco-ngdp-vs
ASIC Count: 1
Serial Number: N/A
Model Number: N/A
Hardware Revision: N/A
Uptime: 09:25:39 up 46 min,  0 user,  load average: 21.87, 20.05, 18.93
Date: Sat 20 Dec 2025 09:25:39

Docker images:
REPOSITORY                    TAG                                              IMAGE ID       SIZE
docker-macsec                 latest                                           874c1eefd246   319MB
docker-macsec                 phoenixwing_08192025.391-dirty-20251219.084725   874c1eefd246   319MB
docker-dhcp-relay             latest                                           a07908f87fa6   295MB
docker-dhcp-relay             phoenixwing_08192025.391-dirty-20251219.084725   a07908f87fa6   295MB
docker-teamd                  latest                                           ad76ff0a44bd   316MB
docker-teamd                  phoenixwing_08192025.391-dirty-20251219.084725   ad76ff0a44bd   316MB
docker-sysmgr                 latest                                           eccf919e9436   298MB
docker-sysmgr                 phoenixwing_08192025.391-dirty-20251219.084725   eccf919e9436   298MB
docker-sonic-mgmt-framework   latest                                           ae33b0f06446   380MB
docker-sonic-mgmt-framework   phoenixwing_08192025.391-dirty-20251219.084725   ae33b0f06446   380MB
docker-snmp                   latest                                           52ff50b68567   311MB
docker-snmp                   phoenixwing_08192025.391-dirty-20251219.084725   52ff50b68567   311MB
docker-sflow                  latest                                           549ec52de645   317MB
docker-sflow                  phoenixwing_08192025.391-dirty-20251219.084725   549ec52de645   317MB
docker-router-advertiser      latest                                           3b686318ee07   286MB
docker-router-advertiser      phoenixwing_08192025.391-dirty-20251219.084725   3b686318ee07   286MB
docker-platform-monitor       latest                                           473732caa65d   420MB
docker-platform-monitor       phoenixwing_08192025.391-dirty-20251219.084725   473732caa65d   420MB
docker-orchagent              latest                                           9f64f043be36   328MB
docker-orchagent              phoenixwing_08192025.391-dirty-20251219.084725   9f64f043be36   328MB
docker-nat                    latest                                           9280e2a0bb73   319MB
docker-nat                    phoenixwing_08192025.391-dirty-20251219.084725   9280e2a0bb73   319MB
docker-mux                    latest                                           8ba7c1cc9b34   338MB
docker-mux                    phoenixwing_08192025.391-dirty-20251219.084725   8ba7c1cc9b34   338MB
docker-lldp                   latest                                           392df2386d9f   332MB
docker-lldp                   phoenixwing_08192025.391-dirty-20251219.084725   392df2386d9f   332MB
docker-sonic-gnmi             latest                                           072c183ae853   402MB
docker-sonic-gnmi             phoenixwing_08192025.391-dirty-20251219.084725   072c183ae853   402MB
docker-gnmi-watchdog          latest                                           f8a005034c76   294MB
docker-gnmi-watchdog          phoenixwing_08192025.391-dirty-20251219.084725   f8a005034c76   294MB
docker-fpm-frr                latest                                           ae73654f0677   365MB
docker-fpm-frr                phoenixwing_08192025.391-dirty-20251219.084725   ae73654f0677   365MB
docker-eventd                 latest                                           d5dd7407c98f   286MB
docker-eventd                 phoenixwing_08192025.391-dirty-20251219.084725   d5dd7407c98f   286MB
docker-database               latest                                           35cafd1c7a33   299MB
docker-database               phoenixwing_08192025.391-dirty-20251219.084725   35cafd1c7a33   299MB
docker-sonic-bmp              latest                                           b4e5693a9f8d   288MB
docker-sonic-bmp              phoenixwing_08192025.391-dirty-20251219.084725   b4e5693a9f8d   288MB
docker-bmp-watchdog           latest                                           66da34b1bd6c   286MB
docker-bmp-watchdog           phoenixwing_08192025.391-dirty-20251219.084725   66da34b1bd6c   286MB
docker-auditd                 latest                                           b2874a39b213   286MB
docker-auditd                 phoenixwing_08192025.391-dirty-20251219.084725   b2874a39b213   286MB
docker-auditd-watchdog        latest                                           c8075e061a0d   289MB
docker-auditd-watchdog        phoenixwing_08192025.391-dirty-20251219.084725   c8075e061a0d   289MB
docker-syncd-ciscovs          latest                                           b2adb363cb2a   1.26GB
docker-syncd-ciscovs          phoenixwing_08192025.391-dirty-20251219.084725   b2adb363cb2a   1.26GB

  Interface                Lanes    Speed    MTU    FEC        Alias    Vlan    Oper    Admin    Type    Asym PFC
-----------  -------------------  -------  -----  -----  -----------  ------  ------  -------  ------  ----------
  Ethernet0  2304,2305,2306,2307     100G   9100    N/A    Ethernet0  routed      up       up     N/A         N/A
  Ethernet4  2320,2321,2322,2323     100G   9100    N/A    Ethernet4  routed      up       up     N/A         N/A
  Ethernet8  2312,2313,2314,2315     100G   9100    N/A    Ethernet8  routed      up       up     N/A         N/A
 Ethernet12  2056,2057,2058,2059     100G   9100    N/A   Ethernet12  routed      up       up     N/A         N/A
 Ethernet16  1792,1793,1794,1795     100G   9100    N/A   Ethernet16  routed      up       up     N/A         N/A
 Ethernet20  2048,2049,2050,2051     100G   9100    N/A   Ethernet20  routed      up       up     N/A         N/A
 Ethernet24  2560,2561,2562,2563     100G   9100    N/A   Ethernet24  routed      up       up     N/A         N/A
 Ethernet28  2824,2825,2826,2827     100G   9100    N/A   Ethernet28  routed      up       up     N/A         N/A
 Ethernet32  2832,2833,2834,2835     100G   9100    N/A   Ethernet32  routed      up       up     N/A         N/A
 Ethernet36  2816,2817,2818,2819     100G   9100    N/A   Ethernet36  routed      up       up     N/A         N/A
 Ethernet40  2568,2569,2570,2571     100G   9100    N/A   Ethernet40  routed      up       up     N/A         N/A
 Ethernet44  2576,2577,2578,2579     100G   9100    N/A   Ethernet44  routed      up       up     N/A         N/A
 Ethernet48  1536,1537,1538,1539     100G   9100    N/A   Ethernet48  routed      up       up     N/A         N/A
 Ethernet52  1800,1801,1802,1803     100G   9100    N/A   Ethernet52  routed      up       up     N/A         N/A
 Ethernet56  1552,1553,1554,1555     100G   9100    N/A   Ethernet56  routed      up       up     N/A         N/A
 Ethernet60  1544,1545,1546,1547     100G   9100    N/A   Ethernet60  routed      up       up     N/A         N/A
 Ethernet64  1296,1297,1298,1299     100G   9100    N/A   Ethernet64  routed      up       up     N/A         N/A
 Ethernet68  1288,1289,1290,1291     100G   9100    N/A   Ethernet68  routed      up       up     N/A         N/A
 Ethernet72  1280,1281,1282,1283     100G   9100    N/A   Ethernet72  routed      up       up     N/A         N/A
 Ethernet76  1032,1033,1034,1035     100G   9100    N/A   Ethernet76  routed      up       up     N/A         N/A
 Ethernet80      264,265,266,267     100G   9100    N/A   Ethernet80  routed      up       up     N/A         N/A
 Ethernet84      272,273,274,275     100G   9100    N/A   Ethernet84  routed      up       up     N/A         N/A
 Ethernet88          16,17,18,19     100G   9100    N/A   Ethernet88  routed      up       up     N/A         N/A
 Ethernet92              0,1,2,3     100G   9100    N/A   Ethernet92  routed      up       up     N/A         N/A
 Ethernet96      256,257,258,259     100G   9100    N/A   Ethernet96  routed      up       up     N/A         N/A
Ethernet100            8,9,10,11     100G   9100    N/A  Ethernet100  routed      up       up     N/A         N/A
Ethernet104  1024,1025,1026,1027     100G   9100    N/A  Ethernet104  routed      up       up     N/A         N/A
Ethernet108      768,769,770,771     100G   9100    N/A  Ethernet108  routed      up       up     N/A         N/A
Ethernet112      524,525,526,527     100G   9100    N/A  Ethernet112  routed      up       up     N/A         N/A
Ethernet116      776,777,778,779     100G   9100    N/A  Ethernet116  routed      up       up     N/A         N/A
Ethernet120      516,517,518,519     100G   9100    N/A  Ethernet120  routed      up       up     N/A         N/A
Ethernet124      528,529,530,531     100G   9100    N/A  Ethernet124  routed      up       up     N/A         N/A
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:38.112216 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:38.112272 : Run show version for range(0, 6)
2025-12-20 17:25:38.112281 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show version'"
2025-12-20 17:25:39.808369 : Run show interface status for range(0, 1)
2025-12-20 17:25:39.808407 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show interface status'"
2025-12-20 17:25:42.406183 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
  Interface    Lanes    Speed    MTU    FEC    Alias    Vlan    Oper    Admin    Type    Asym PFC
-----------  -------  -------  -----  -----  -------  ------  ------  -------  ------  ----------
Interface    Master    IPv4 address/mask    Admin/Oper    BGP Neighbor    Neighbor IP
-----------  --------  -------------------  ------------  --------------  -------------
Loopback0              100.1.0.35/32        up/up         N/A             N/A
docker0                240.127.1.1/24       up/down       N/A             N/A
eth0                   10.250.0.125/24      up/up         N/A             N/A
lo                     127.0.0.1/16         up/up         N/A             N/A
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
2025-12-20 17:25:42.453002 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:42.453057 : Run show interface status for range(0, 6)
2025-12-20 17:25:42.453067 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show interface status'"
2025-12-20 17:25:44.213008 : Run show ip interface for range(0, 1)
2025-12-20 17:25:44.213043 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show ip interface'"
2025-12-20 17:25:46.475744 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
Interface    Master    IPv4 address/mask    Admin/Oper    BGP Neighbor    Neighbor IP
-----------  --------  -------------------  ------------  --------------  -------------
Loopback0              100.1.0.29/32        up/up         N/A             N/A
docker0                240.127.1.1/24       up/down       N/A             N/A
eth0                   10.250.0.51/24       up/up         N/A             N/A
lo                     127.0.0.1/16         up/up         N/A             N/A
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
No.    Team Dev    Protocol    Ports
-----  ----------  ----------  -------
2025-12-20 17:25:46.523666 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:46.523719 : Run show ip interface for range(0, 6)
2025-12-20 17:25:46.523728 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show ip interface'"
2025-12-20 17:25:49.031787 : Run show interface portchannel for range(0, 1)
2025-12-20 17:25:49.031823 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show interface portchannel'"
2025-12-20 17:25:50.582995 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:50.632429 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:50.632487 : Run show interface portchannel for range(0, 6)
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
No.    Team Dev    Protocol    Ports
-----  ----------  ----------  -------
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

C>*10.250.0.0/24 is directly connected, eth0, 00:23:23
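The route line above uses FRR's code flags (`C` connected, `>` selected, `*` FIB-installed) from the legend printed before it. A sketch of parsing that one connected-route shape with a regex; the pattern is our own and covers only the form that appears in this log, not every FRR route type:

```python
import re

# Matches "C>*10.250.0.0/24 is directly connected, eth0, 00:23:23"
# from FRR's `show ip route`; connected routes only.
ROUTE_RE = re.compile(
    r'^(?P<code>[A-Za-z])(?P<selected>>?)(?P<fib>\*?)'
    r'(?P<prefix>\S+) is directly connected, (?P<ifname>\S+), (?P<uptime>\S+)$'
)

def parse_connected(line: str) -> dict:
    """Split one connected-route line into its flag and field parts."""
    m = ROUTE_RE.match(line)
    if not m:
        raise ValueError(f"not a connected route line: {line!r}")
    d = m.groupdict()
    d['selected'] = d['selected'] == '>'   # '>' => selected route
    d['fib'] = d['fib'] == '*'             # '*' => installed in FIB
    return d

route = parse_connected('C>*10.250.0.0/24 is directly connected, eth0, 00:23:23')
```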
2025-12-20 17:25:50.632497 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show interface portchannel'"
2025-12-20 17:25:52.094845 : Run show ip route for range(0, 1)
2025-12-20 17:25:52.094877 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show ip route'"
2025-12-20 17:25:53.962486 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

C>*10.250.0.0/24 is directly connected, eth0, 00:05:29
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:72:07:24 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:e8:e8:7a brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:6f:48:c4 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:53:87:dd brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:c9:5f:96 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:dc:c3:d4 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:cf:34:8c brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:9f:8e:4c brd ff:ff:ff:ff:ff:ff
10: eth8: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:05:d8:3f brd ff:ff:ff:ff:ff:ff
11: eth9: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:9e:b3:3c brd ff:ff:ff:ff:ff:ff
12: eth10: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:40:ff:82 brd ff:ff:ff:ff:ff:ff
13: eth11: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:3b:7c:25 brd ff:ff:ff:ff:ff:ff
14: eth12: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:9d:78:90 brd ff:ff:ff:ff:ff:ff
15: eth13: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:16:d3:6c brd ff:ff:ff:ff:ff:ff
16: eth14: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:50:95:8a brd ff:ff:ff:ff:ff:ff
17: eth15: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:fe:41:2f brd ff:ff:ff:ff:ff:ff
18: eth16: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:cc:6c:71 brd ff:ff:ff:ff:ff:ff
19: eth17: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d2:7b:e2 brd ff:ff:ff:ff:ff:ff
20: eth18: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f2:26:cf brd ff:ff:ff:ff:ff:ff
21: eth19: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:66:89:60 brd ff:ff:ff:ff:ff:ff
22: eth20: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:02:1e:cd brd ff:ff:ff:ff:ff:ff
23: eth21: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f1:c7:67 brd ff:ff:ff:ff:ff:ff
24: eth22: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f7:a1:a6 brd ff:ff:ff:ff:ff:ff
25: eth23: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:43:75:62 brd ff:ff:ff:ff:ff:ff
26: eth24: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1e:58:df brd ff:ff:ff:ff:ff:ff
27: eth25: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:28:2d:e5 brd ff:ff:ff:ff:ff:ff
28: eth26: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:20:7f:db brd ff:ff:ff:ff:ff:ff
29: eth27: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:d6:6f:ad brd ff:ff:ff:ff:ff:ff
30: eth28: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:fc:d2:18 brd ff:ff:ff:ff:ff:ff
31: eth29: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:3c:8c:5d brd ff:ff:ff:ff:ff:ff
32: eth30: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:cc:22:fb brd ff:ff:ff:ff:ff:ff
33: eth31: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f3:ed:fb brd ff:ff:ff:ff:ff:ff
34: eth32: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1f:44:b4 brd ff:ff:ff:ff:ff:ff
35: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:e8:92:02:1e brd ff:ff:ff:ff:ff:ff
36: swveth1@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ca:fe:5a:16:a6:b8 brd ff:ff:ff:ff:ff:ff
37: veth1@swveth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 06:cf:fe:aa:e7:22 brd ff:ff:ff:ff:ff:ff
38: swveth2@veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:18:59:01:dd:ac brd ff:ff:ff:ff:ff:ff
39: veth2@swveth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 82:df:14:de:16:73 brd ff:ff:ff:ff:ff:ff
40: swveth3@veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:db:30:0f:dc:de brd ff:ff:ff:ff:ff:ff
41: veth3@swveth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 96:17:c7:3d:0c:99 brd ff:ff:ff:ff:ff:ff
42: swveth4@veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:f6:79:57:35:8e brd ff:ff:ff:ff:ff:ff
43: veth4@swveth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:99:b2:38:81:d9 brd ff:ff:ff:ff:ff:ff
44: swveth5@veth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f2:ee:b5:99:39:ec brd ff:ff:ff:ff:ff:ff
45: veth5@swveth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4e:05:63:81:da:23 brd ff:ff:ff:ff:ff:ff
47: swveth6@veth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:74:fd:ab:e2:7c brd ff:ff:ff:ff:ff:ff
48: veth6@swveth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c2:dd:f7:5a:66:ee brd ff:ff:ff:ff:ff:ff
49: swveth7@veth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 36:c0:3c:2d:d6:b4 brd ff:ff:ff:ff:ff:ff
50: veth7@swveth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:d8:1f:8f:e0:7f brd ff:ff:ff:ff:ff:ff
51: swveth8@veth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:65:02:ac:a9:8b brd ff:ff:ff:ff:ff:ff
52: veth8@swveth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:fa:1e:4c:9d:b1 brd ff:ff:ff:ff:ff:ff
54: swveth9@veth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:c4:f9:d3:a2:49 brd ff:ff:ff:ff:ff:ff
55: veth9@swveth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:ff:e1:e9:d0:9b brd ff:ff:ff:ff:ff:ff
56: swveth10@veth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:08:74:4a:e7:cd brd ff:ff:ff:ff:ff:ff
57: veth10@swveth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:c2:f0:eb:b8:44 brd ff:ff:ff:ff:ff:ff
59: swveth11@veth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:1a:dd:f4:fd:3e brd ff:ff:ff:ff:ff:ff
60: veth11@swveth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:5d:70:34:cd:b3 brd ff:ff:ff:ff:ff:ff
61: swveth12@veth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:75:e5:11:c9:05 brd ff:ff:ff:ff:ff:ff
62: veth12@swveth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:07:e4:16:1a:e9 brd ff:ff:ff:ff:ff:ff
63: swveth13@veth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6e:13:29:eb:48:e4 brd ff:ff:ff:ff:ff:ff
64: veth13@swveth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ae:d5:bd:e7:68:06 brd ff:ff:ff:ff:ff:ff
65: swveth14@veth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:9f:09:6a:85:67 brd ff:ff:ff:ff:ff:ff
66: veth14@swveth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether be:a4:e8:54:f7:10 brd ff:ff:ff:ff:ff:ff
67: swveth15@veth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:8c:fa:a4:4e:f9 brd ff:ff:ff:ff:ff:ff
68: veth15@swveth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:34:5b:76:e7:91 brd ff:ff:ff:ff:ff:ff
69: swveth16@veth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:7e:b5:0b:3f:ee brd ff:ff:ff:ff:ff:ff
70: veth16@swveth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:ba:30:66:35:f5 brd ff:ff:ff:ff:ff:ff
71: swveth17@veth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:99:28:11:78:e9 brd ff:ff:ff:ff:ff:ff
72: veth17@swveth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:aa:5a:57:4b:1c brd ff:ff:ff:ff:ff:ff
73: swveth18@veth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f2:19:c4:70:d5:2b brd ff:ff:ff:ff:ff:ff
74: veth18@swveth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8a:c9:53:f0:97:64 brd ff:ff:ff:ff:ff:ff
75: swveth19@veth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:0d:da:32:4a:37 brd ff:ff:ff:ff:ff:ff
76: veth19@swveth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:01:fb:c0:00:a4 brd ff:ff:ff:ff:ff:ff
77: swveth20@veth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:cb:83:f9:f6:0a brd ff:ff:ff:ff:ff:ff
78: veth20@swveth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:a3:07:75:a9:2e brd ff:ff:ff:ff:ff:ff
79: swveth21@veth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:5c:ac:07:fc:75 brd ff:ff:ff:ff:ff:ff
80: veth21@swveth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 92:45:71:f8:65:29 brd ff:ff:ff:ff:ff:ff
81: swveth22@veth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:77:ff:e5:ff:ee brd ff:ff:ff:ff:ff:ff
82: veth22@swveth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:3c:e1:47:86:f2 brd ff:ff:ff:ff:ff:ff
83: swveth23@veth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:7e:9c:f9:19:61 brd ff:ff:ff:ff:ff:ff
84: veth23@swveth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:bc:49:9f:f8:36 brd ff:ff:ff:ff:ff:ff
85: swveth24@veth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8e:24:1f:5d:63:ee brd ff:ff:ff:ff:ff:ff
86: veth24@swveth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:c1:45:05:71:09 brd ff:ff:ff:ff:ff:ff
87: swveth25@veth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:cd:ad:f1:02:60 brd ff:ff:ff:ff:ff:ff
88: veth25@swveth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 16:ce:47:b3:18:1f brd ff:ff:ff:ff:ff:ff
89: swveth26@veth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:6c:0c:f0:e4:ca brd ff:ff:ff:ff:ff:ff
90: veth26@swveth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:ba:2e:93:ab:ba brd ff:ff:ff:ff:ff:ff
91: swveth27@veth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:65:f7:a3:b5:8a brd ff:ff:ff:ff:ff:ff
92: veth27@swveth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ba:8c:89:36:a8:5a brd ff:ff:ff:ff:ff:ff
93: swveth28@veth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:00:d1:01:c2:93 brd ff:ff:ff:ff:ff:ff
94: veth28@swveth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:89:05:77:96:91 brd ff:ff:ff:ff:ff:ff
95: swveth29@veth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4a:26:dc:77:62:2c brd ff:ff:ff:ff:ff:ff
96: veth29@swveth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6e:0c:ca:e7:84:2c brd ff:ff:ff:ff:ff:ff
97: swveth30@veth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:c1:a2:09:98:a2 brd ff:ff:ff:ff:ff:ff
98: veth30@swveth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:ac:8e:bc:d0:ac brd ff:ff:ff:ff:ff:ff
99: swveth31@veth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:19:c8:98:68:ca brd ff:ff:ff:ff:ff:ff
100: veth31@swveth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 56:57:50:a5:e3:56 brd ff:ff:ff:ff:ff:ff
101: swveth32@veth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:ba:80:b4:ce:9e brd ff:ff:ff:ff:ff:ff
102: veth32@swveth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:0c:50:29:2d:a0 brd ff:ff:ff:ff:ff:ff
103: pimreg@NONE: <NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/pimreg 
104: Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
105: Loopback0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 9e:aa:ce:b4:8a:f3 brd ff:ff:ff:ff:ff:ff
106: dummy: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master Bridge state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 42:b8:ce:ca:7d:2a brd ff:ff:ff:ff:ff:ff
107: Ethernet92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
108: Ethernet100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
109: Ethernet88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
110: Ethernet96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
111: Ethernet80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
112: Ethernet84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
113: Ethernet120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
114: Ethernet112: <BROADCAST,MULTICAST> mtu 9100 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
115: Ethernet124: <BROADCAST,MULTICAST> mtu 9100 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
116: Ethernet108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
117: Ethernet116: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
118: Ethernet104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
119: Ethernet76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
120: Ethernet72: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
121: Ethernet68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
122: Ethernet64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
123: Ethernet48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
124: Ethernet60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
125: Ethernet56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
126: Ethernet16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
127: Ethernet52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
128: Ethernet20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
129: Ethernet12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
130: Ethernet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
131: Ethernet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
132: Ethernet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
133: Ethernet24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
134: Ethernet40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
135: Ethernet44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
136: Ethernet36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
137: Ethernet28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
138: Ethernet32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:00:00:4e brd ff:ff:ff:ff:ff:ff
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:54.025585 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:54.025648 : Run show ip route for range(0, 6)
2025-12-20 17:25:54.025660 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show ip route'"
2025-12-20 17:25:55.792079 : Run ip link for range(0, 1)
2025-12-20 17:25:55.792117 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 ip link'"
2025-12-20 17:25:56.884787 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:a0:1d:fd brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:f0:97:24 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:47:75:66 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:c6:6c:8b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:03:af:a8 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:30:a2:66 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:2b:f3:2b brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:62:78:05 brd ff:ff:ff:ff:ff:ff
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:da:0f:75:69 brd ff:ff:ff:ff:ff:ff
11: swveth1@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:af:11:67:3b:b9 brd ff:ff:ff:ff:ff:ff
12: veth1@swveth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e2:e8:70:d3:c9:1b brd ff:ff:ff:ff:ff:ff
13: swveth2@veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d2:9b:8b:b8:0e:e1 brd ff:ff:ff:ff:ff:ff
14: veth2@swveth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:93:5a:80:56:42 brd ff:ff:ff:ff:ff:ff
15: swveth3@veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:15:0c:b8:9b:27 brd ff:ff:ff:ff:ff:ff
16: veth3@swveth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f2:4c:d7:fc:dc:e9 brd ff:ff:ff:ff:ff:ff
17: swveth4@veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:db:68:fb:da:67 brd ff:ff:ff:ff:ff:ff
18: veth4@swveth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0a:b4:aa:b8:5a:e3 brd ff:ff:ff:ff:ff:ff
19: swveth5@veth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:a3:c7:0b:f6:aa brd ff:ff:ff:ff:ff:ff
20: veth5@swveth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7e:39:96:6a:5a:33 brd ff:ff:ff:ff:ff:ff
21: swveth6@veth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:cc:bd:83:70:d5 brd ff:ff:ff:ff:ff:ff
22: veth6@swveth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:e2:ef:c9:90:ee brd ff:ff:ff:ff:ff:ff
23: swveth7@veth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:67:92:7b:9a:1d brd ff:ff:ff:ff:ff:ff
24: veth7@swveth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ca:50:45:6a:5b:e4 brd ff:ff:ff:ff:ff:ff
25: swveth8@veth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:15:21:be:c7:c6 brd ff:ff:ff:ff:ff:ff
26: veth8@swveth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3e:93:3a:58:7a:87 brd ff:ff:ff:ff:ff:ff
27: swveth9@veth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:d3:fc:d5:f1:57 brd ff:ff:ff:ff:ff:ff
28: veth9@swveth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:f1:e2:b7:99:45 brd ff:ff:ff:ff:ff:ff
29: swveth10@veth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:53:e5:28:af:1e brd ff:ff:ff:ff:ff:ff
30: veth10@swveth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8e:db:1c:01:cb:d4 brd ff:ff:ff:ff:ff:ff
31: swveth11@veth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:7f:e8:82:04:d2 brd ff:ff:ff:ff:ff:ff
32: veth11@swveth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:d6:ea:14:82:ee brd ff:ff:ff:ff:ff:ff
33: swveth12@veth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:08:a3:cd:b8:9e brd ff:ff:ff:ff:ff:ff
34: veth12@swveth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ae:44:73:8c:46:4e brd ff:ff:ff:ff:ff:ff
35: swveth13@veth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 06:32:3a:de:22:7b brd ff:ff:ff:ff:ff:ff
36: veth13@swveth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:e3:e7:bc:f6:07 brd ff:ff:ff:ff:ff:ff
37: swveth14@veth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5e:b6:c5:69:27:c6 brd ff:ff:ff:ff:ff:ff
38: veth14@swveth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:e6:5c:9b:fd:ee brd ff:ff:ff:ff:ff:ff
39: swveth15@veth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:01:d7:c9:c3:c4 brd ff:ff:ff:ff:ff:ff
40: veth15@swveth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:2f:06:f9:70:eb brd ff:ff:ff:ff:ff:ff
41: swveth16@veth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a6:13:6b:b5:6b:e2 brd ff:ff:ff:ff:ff:ff
42: veth16@swveth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:ff:f0:41:06:b6 brd ff:ff:ff:ff:ff:ff
43: swveth17@veth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 16:16:5a:81:f1:12 brd ff:ff:ff:ff:ff:ff
44: veth17@swveth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:bc:dd:62:b3:57 brd ff:ff:ff:ff:ff:ff
45: swveth18@veth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:21:e8:41:5b:2c brd ff:ff:ff:ff:ff:ff
46: veth18@swveth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 06:ff:ca:91:69:6e brd ff:ff:ff:ff:ff:ff
47: swveth19@veth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:ce:a6:1e:fc:cd brd ff:ff:ff:ff:ff:ff
48: veth19@swveth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a2:bd:8c:19:27:df brd ff:ff:ff:ff:ff:ff
49: swveth20@veth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:0c:58:40:a3:3c brd ff:ff:ff:ff:ff:ff
50: veth20@swveth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:a3:91:22:25:33 brd ff:ff:ff:ff:ff:ff
51: swveth21@veth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 92:c5:42:8a:30:11 brd ff:ff:ff:ff:ff:ff
52: veth21@swveth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:3e:89:79:2c:4c brd ff:ff:ff:ff:ff:ff
53: swveth22@veth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 36:40:eb:e5:83:cc brd ff:ff:ff:ff:ff:ff
54: veth22@swveth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6e:97:b8:f3:cc:b1 brd ff:ff:ff:ff:ff:ff
55: swveth23@veth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:5f:54:c3:af:ec brd ff:ff:ff:ff:ff:ff
56: veth23@swveth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 56:99:ec:d3:41:4e brd ff:ff:ff:ff:ff:ff
57: swveth24@veth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 32:8c:8e:00:78:ef brd ff:ff:ff:ff:ff:ff
58: veth24@swveth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 12:49:0d:1b:77:87 brd ff:ff:ff:ff:ff:ff
59: swveth25@veth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 66:0b:bd:2a:af:74 brd ff:ff:ff:ff:ff:ff
60: veth25@swveth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 72:7b:c4:bc:8f:85 brd ff:ff:ff:ff:ff:ff
61: swveth26@veth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:13:74:1d:c4:cd brd ff:ff:ff:ff:ff:ff
62: veth26@swveth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:c7:20:33:fc:1a brd ff:ff:ff:ff:ff:ff
63: swveth27@veth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a6:09:57:7c:9c:16 brd ff:ff:ff:ff:ff:ff
64: veth27@swveth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:45:0d:26:9a:c2 brd ff:ff:ff:ff:ff:ff
65: swveth28@veth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f6:7f:c3:50:d4:1e brd ff:ff:ff:ff:ff:ff
66: veth28@swveth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:f1:b2:fe:76:0d brd ff:ff:ff:ff:ff:ff
67: swveth29@veth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:82:7c:0a:31:7b brd ff:ff:ff:ff:ff:ff
68: veth29@swveth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5a:ea:da:55:32:05 brd ff:ff:ff:ff:ff:ff
69: swveth30@veth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:56:40:de:e5:82 brd ff:ff:ff:ff:ff:ff
70: veth30@swveth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:23:2f:0f:68:0e brd ff:ff:ff:ff:ff:ff
71: swveth31@veth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:e3:27:cc:59:62 brd ff:ff:ff:ff:ff:ff
72: veth31@swveth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:fb:2b:08:fa:64 brd ff:ff:ff:ff:ff:ff
73: swveth32@veth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:a4:55:8d:dc:f5 brd ff:ff:ff:ff:ff:ff
74: veth32@swveth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:48:ec:85:e9:16 brd ff:ff:ff:ff:ff:ff
124: pimreg@NONE: <NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/pimreg 
127: Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
128: Loopback0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 0e:ee:9d:3c:e2:11 brd ff:ff:ff:ff:ff:ff
129: dummy: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master Bridge state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether e2:a9:2d:af:e3:f0 brd ff:ff:ff:ff:ff:ff
130: Vrf1: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:16:79:82:ad:a9 brd ff:ff:ff:ff:ff:ff
131: Vrf2: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e6:9c:e5:ea:b4:20 brd ff:ff:ff:ff:ff:ff
10.250.0.0/24 dev eth0 proto kernel scope link src 10.250.0.125 
240.127.1.0/24 dev docker0 proto kernel scope link src 240.127.1.1 linkdown 
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:56.947949 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:56.948013 : Run ip link for range(0, 6)
2025-12-20 17:25:56.948026 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 ip link'"
2025-12-20 17:25:57.984531 : Run ip route for range(0, 1)
2025-12-20 17:25:57.984567 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 ip route'"
2025-12-20 17:25:59.293390 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
10.250.0.0/24 dev eth0 proto kernel scope link src 10.250.0.51 
240.127.1.0/24 dev docker0 proto kernel scope link src 240.127.1.1 linkdown 
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-20 17:25:59.356285 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-20 17:25:59.356355 : Run ip route for range(0, 6)
2025-12-20 17:25:59.356367 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 ip route'"
2025-12-20 17:26:00.387989 : rm -rf /tmp/local_cache//1766221812.8726053/
--- 947.5176649093628 seconds ---
