Looking in indexes: https://mirrors.aliyun.com/pypi/simple/
Collecting paramiko>=3.5.1
  Using cached https://mirrors.aliyun.com/pypi/packages/15/f8/c7bd0ef12954a81a1d3cea60a13946bd9a49a0036a5927770c461eade7ae/paramiko-3.5.1-py3-none-any.whl (227 kB)
Requirement already satisfied: bcrypt>=3.2 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (4.0.1)
Requirement already satisfied: cryptography>=3.3 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (3.3.2)
Requirement already satisfied: pynacl>=1.5 in ./env-python3/lib/python3.8/site-packages (from paramiko>=3.5.1) (1.5.0)
Requirement already satisfied: six>=1.4.1 in ./env-python3/lib/python3.8/site-packages (from cryptography>=3.3->paramiko>=3.5.1) (1.16.0)
Requirement already satisfied: cffi>=1.12 in ./env-python3/lib/python3.8/site-packages (from cryptography>=3.3->paramiko>=3.5.1) (1.15.1)
Requirement already satisfied: pycparser in ./env-python3/lib/python3.8/site-packages (from cffi>=1.12->cryptography>=3.3->paramiko>=3.5.1) (2.21)
Installing collected packages: paramiko
  Attempting uninstall: paramiko
    Found existing installation: paramiko 2.7.1
    Uninstalling paramiko-2.7.1:
      Successfully uninstalled paramiko-2.7.1
Successfully installed paramiko-3.5.1
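The upgrade above replaces paramiko 2.7.1 with 3.5.1 to satisfy the `paramiko>=3.5.1` floor. A pre-flight check of this kind can fail fast if the environment still resolves an older copy (a minimal sketch using only the stdlib; it handles plain dotted numeric versions like `3.5.1`, which is all the log requires):

```python
# Minimal sketch: verify an installed package version meets a floor
# before launching a test run. Pure-stdlib; only handles plain dotted
# numeric versions (sufficient for "3.5.1" from the install log above).

def version_tuple(version):
    """Turn '3.5.1' into (3, 5, 1) for numeric tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def meets_floor(installed, floor):
    """True if the installed version satisfies the >=floor constraint."""
    return version_tuple(installed) >= version_tuple(floor)

if __name__ == "__main__":
    # Floors and versions taken from the pip output: paramiko>=3.5.1,
    # with 2.7.1 being the copy that was uninstalled.
    assert meets_floor("3.5.1", "3.5.1")
    assert not meets_floor("2.7.1", "3.5.1")
```

Comparing tuples rather than strings matters: lexically `"3.10.0" < "3.5.1"`, but numerically `(3, 10, 0) >= (3, 5, 1)` holds, which is the comparison pip's specifier logic actually performs.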
=== Running tests in groups ===
Running: python3 -m pytest srv6/test_srv6_basic_sanity.py --inventory ../ansible/veos_vtb --host-pattern vlab-c-01 --testbed vms-kvm-ciscovs-7nodes --testbed_file vtestbed.yaml --log-cli-level warning --log-file-level debug --kube_master unset --showlocals --assert plain --show-capture no -rav --allow_recover --ignore=ptftests --ignore=acstests --ignore=saitests --ignore=scripts --ignore=k8s --ignore=sai_qualify --junit-xml=logs/tr.xml --log-file=logs/test.log --skip_sanity --disable_loganalyzer --neighbor_type=sonic
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
ansible: 2.9.27
rootdir: /data/sonic-mgmt/tests, configfile: pytest.ini
plugins: forked-1.6.0, allure-pytest-2.8.22, xdist-1.28.0, html-3.2.0, ansible-2.2.4, repeat-0.9.1, metadata-2.0.4, celery-4.4.7

----------------------------- live log collection ------------------------------
09:10:22 __init__.load_dut_basic_facts            L0155 ERROR  | Failed to load dut basic facts, exception: CalledProcessError(4, ['ansible', '-m', 'dut_basic_facts', '-i', '/data/sonic-mgmt/tests/common/plugins/conditional_mark/../../../../ansible/veos_vtb', 'vlab-c-01', '-o'])
09:10:27 __init__.load_minigraph_facts            L0245 ERROR  | Failed to load minigraph basic facts, exception: CalledProcessError(4, ['ansible', '-m', 'minigraph_facts', '-i', '../ansible/veos_vtb', 'vlab-c-01', '-a', 'host=vlab-c-01'])
09:10:31 __init__.load_config_facts               L0277 ERROR  | Failed to load config basic facts, exception: CalledProcessError(4, ['ansible', '-m', 'config_facts', '-i', '../ansible/veos_vtb', 'vlab-c-01', '-a', "host=vlab-c-01 source='persistent'"])
09:10:35 __init__.load_switch_capabilities_facts  L0306 ERROR  | Failed to load switch capabilities basic facts, exception: CalledProcessError(4, ['ansible', '-m', 'switch_capabilities_facts', '-i', '../ansible/veos_vtb', 'vlab-c-01'])
09:10:40 __init__.load_config_facts               L0277 ERROR  | Failed to load config basic facts, exception: CalledProcessError(4, ['ansible', '-m', 'config_facts', '-i', '../ansible/veos_vtb', 'vlab-c-01', '-a', "host=vlab-c-01 source='persistent'"])
collected 9 items

srv6/test_srv6_basic_sanity.py::test_interface_on_each_node 
-------------------------------- live log setup --------------------------------
09:10:44 conftest.fixture_duthosts                L0366 ERROR  | Failed to initialize duthosts.
ERROR                                                                    [ 11%]
srv6/test_srv6_basic_sanity.py::test_check_bgp_neighbors ERROR           [ 22%]
srv6/test_srv6_basic_sanity.py::test_check_routes ERROR                  [ 33%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_via_trex ERROR        [ 44%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_via_ptf ERROR         [ 55%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_local_link_fail_case ERROR [ 66%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_igp_fail_case ERROR [ 77%]
srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_bgp_fail_case ERROR [ 88%]
srv6/test_srv6_basic_sanity.py::test_sbfd_functions SKIPPED (This te...) [100%]

==================================== ERRORS ====================================
________________ ERROR at setup of test_interface_on_each_node _________________

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
>           host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:363: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duts = ['vlab-c-01']

    def __init__(self, ansible_adhoc, tbinfo, duts):
        """ Initialize a multi-dut testbed with all the DUT's defined in testbed info.
    
        Args:
            ansible_adhoc: The pytest-ansible fixture
            tbinfo - Testbed info whose "duts" holds the hostnames for the DUT's in the multi-dut testbed.
            duts - list of DUT hostnames from the `--host-pattern` CLI option. Can be specified if only a subset of
                   DUTs in the testbed should be used
    
        """
        self.ansible_adhoc = ansible_adhoc
        self.tbinfo = tbinfo
        self.duts = duts
>       self.__initialize_nodes()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duts       = ['vlab-c-01']
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

common/devices/duthosts.py:60: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

    def __initialize_nodes(self):
        # TODO: Initialize the nodes in parallel using multi-threads?
>       self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                                  for hostname in self.tbinfo["duts"] if hostname in self.duts])

self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

.0 = <list_iterator object at 0x7f745363d580>

>   self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                              for hostname in self.tbinfo["duts"] if hostname in self.duts])

.0         = <list_iterator object at 0x7f745363d580>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01'
duthosts = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
topo_type = 'ciscovs'

    def __init__(self, ansible_adhoc, hostname, duthosts, topo_type):
        """ Initializing a MultiAsicSonicHost.
    
        Args:
            ansible_adhoc : The pytest-ansible fixture
            hostname: Name of the host in the ansible inventory
        """
        self.duthosts = duthosts
        self.topo_type = topo_type
        self.loganalyzer = None
>       self.sonichost = SonicHost(ansible_adhoc, hostname)

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duthosts   = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
topo_type  = 'ciscovs'

common/devices/multi_asic.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01', shell_user = None, shell_passwd = None, ssh_user = None
ssh_passwd = None

    def __init__(self, ansible_adhoc, hostname,
                 shell_user=None, shell_passwd=None,
                 ssh_user=None, ssh_passwd=None):
        AnsibleHostBase.__init__(self, ansible_adhoc, hostname)
    
        self.DEFAULT_ASIC_SERVICES = ["bgp", "database", "lldp", "swss", "syncd", "teamd"]
    
        if shell_user and shell_passwd:
            im = self.host.options['inventory_manager']
            vm = self.host.options['variable_manager']
            sonic_conn = vm.get_vars(
                host=im.get_hosts(pattern='sonic')[0]
            )['ansible_connection']
            hostvars = vm.get_vars(host=im.get_host(hostname=self.hostname))
            # parse connection options and reset those options with
            # passed credentials
            connection_loader.get(sonic_conn, class_only=True)
            user_def = ansible_constants.config.get_configuration_definition(
                "remote_user", "connection", sonic_conn
            )
            pass_def = ansible_constants.config.get_configuration_definition(
                "password", "connection", sonic_conn
            )
            for user_var in (_['name'] for _ in user_def['vars']):
                if user_var in hostvars:
                    vm.extra_vars.update({user_var: shell_user})
            for pass_var in (_['name'] for _ in pass_def['vars']):
                if pass_var in hostvars:
                    vm.extra_vars.update({pass_var: shell_passwd})
    
        if ssh_user and ssh_passwd:
            evars = {
                'ansible_ssh_user': ssh_user,
                'ansible_ssh_pass': ssh_passwd,
            }
            self.host.options['variable_manager'].extra_vars.update(evars)
    
>       self._facts = self._gather_facts()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname   = 'vlab-c-01'
self       = <SonicHost vlab-c-01>
shell_passwd = None
shell_user = None
ssh_passwd = None
ssh_user   = None

common/devices/sonic.py:86: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<SonicHost vlab-c-01>,), kargs = {}
_zone_getter = <function _get_default_zone at 0x7f74558a1550>
zone = 'vlab-c-01', cached_facts = <object object at 0x7f7459cf5770>

    def wrapper(*args, **kargs):
        _zone_getter = zone_getter or _get_default_zone
        zone = _zone_getter(target, args, kargs)
    
        cached_facts = cache.read(zone, name)
        if after_read:
            cached_facts = after_read(cached_facts, target, args, kargs)
        if cached_facts is not FactsCache.NOTEXIST:
            return cached_facts
        else:
>           facts = target(*args, **kargs)

_zone_getter = <function _get_default_zone at 0x7f74558a1550>
after_read = None
args       = (<SonicHost vlab-c-01>,)
before_write = None
cache      = <tests.common.cache.facts_cache.FactsCache object at 0x7f74568f7580>
cached_facts = <object object at 0x7f7459cf5770>
kargs      = {}
name       = 'basic_facts'
target     = <function SonicHost._gather_facts at 0x7f7453c98550>
zone       = 'vlab-c-01'
zone_getter = None

common/cache/facts_cache.py:228: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    @cached(name='basic_facts')
    def _gather_facts(self):
        """
        Gather facts about the platform for this SONiC device.
        """
>       facts = self._get_platform_info()

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    def _get_platform_info(self):
        """
        Gets platform information about this SONiC device.
        """
    
>       platform_info = self.command("show platform summary")["stdout_lines"]

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>, module_args = ['show platform summary']
complex_args = {}
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
filename = '/data/sonic-mgmt/tests/common/devices/sonic.py', line_number = 311
function_name = '_get_platform_info'
lines = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
index = 0, verbose = True, module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
>       res = self.module(*module_args, **complex_args)[self.hostname]

complex_args = {}
filename   = '/data/sonic-mgmt/tests/common/devices/sonic.py'
function_name = '_get_platform_info'
index      = 0
line_number = 311
lines      = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
module_args = ['show platform summary']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
self       = <SonicHost vlab-c-01>
verbose    = True

common/devices/base.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
module_args = ('show platform summary',)
complex_args = {'_raw_params': 'show platform summary'}, hosts = [vlab-c-01]
no_hosts = False
args = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
verbosity = None, verbosity_syntax = '-vvvvv', argument = 'module-path'
arg_value = ['/data/sonic-mgmt/ansible/library']
cb = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
kwargs = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}

    def _run(self, *module_args, **complex_args):
        """Execute an ansible adhoc command returning the result in a AdhocResult object."""
        # Assemble module argument string
        if module_args:
            complex_args.update(dict(_raw_params=' '.join(module_args)))
    
        # Assert hosts matching the provided pattern exist
        hosts = self.options['inventory_manager'].list_hosts()
        no_hosts = False
        if len(hosts) == 0:
            no_hosts = True
            warnings.warn("provided hosts list is empty, only localhost is available")
    
        self.options['inventory_manager'].subset(self.options.get('subset'))
        hosts = self.options['inventory_manager'].list_hosts(self.options['host_pattern'])
        if len(hosts) == 0 and not no_hosts:
            raise ansible.errors.AnsibleError("Specified hosts and/or --limit does not match any hosts")
    
        # Pass along cli options
        args = ['pytest-ansible']
        verbosity = None
        for verbosity_syntax in ('-v', '-vv', '-vvv', '-vvvv', '-vvvvv'):
            if verbosity_syntax in sys.argv:
                verbosity = verbosity_syntax
                break
        if verbosity is not None:
            args.append(verbosity_syntax)
        args.extend([self.options['host_pattern']])
        for argument in ('connection', 'user', 'become', 'become_method', 'become_user', 'module_path'):
            arg_value = self.options.get(argument)
            argument = argument.replace('_', '-')
    
            if arg_value in (None, False):
                continue
    
            if arg_value is True:
                args.append('--{0}'.format(argument))
            else:
                args.append('--{0}={1}'.format(argument, arg_value))
    
        # Use Ansible's own adhoc cli to parse the fake command line we created and then save it
        # into Ansible's global context
        adhoc = AdHocCLI(args)
        adhoc.parse()
    
        # And now we'll never speak of this again
        del adhoc
    
        # Initialize callback to capture module JSON responses
        cb = ResultAccumulator()
    
        kwargs = dict(
            inventory=self.options['inventory_manager'],
            variable_manager=self.options['variable_manager'],
            loader=self.options['loader'],
            stdout_callback=cb,
            passwords=dict(conn_pass=None, become_pass=None),
        )
    
        # create a pseudo-play to execute the specified module via a single task
        play_ds = dict(
            name="pytest-ansible",
            hosts=self.options['host_pattern'],
            become=self.options.get('become'),
            become_user=self.options.get('become_user'),
            gather_facts='no',
            tasks=[
                dict(
                    action=dict(
                        module=self.options['module_name'], args=complex_args
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
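Every setup error in this run bottoms out in the same `AnsibleConnectionFailure: Host unreachable` against vlab-c-01, so the long tracebacks are a symptom, not the fault. A cheap TCP probe of the DUT's SSH port, sketched below, can separate "host down / sshd not listening" from a genuine ansible or fixture problem before rerunning the suite (the hostname and port are illustrative; real values come from the ansible inventory, e.g. `../ansible/veos_vtb`):

```python
# Sketch: probe whether a DUT's SSH port accepts TCP connections before
# kicking off pytest, to distinguish an unreachable host from a test bug.
# The hostname below is illustrative; resolve real targets from the
# ansible inventory in practice.

import socket

def ssh_reachable(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

if __name__ == "__main__":
    host = "vlab-c-01"  # illustrative: the DUT named in the log above
    print("reachable" if ssh_reachable(host) else "unreachable")
```

If the probe fails, rerunning pytest will only reproduce the same `duthosts` fixture error; the fix lies in the testbed (DUT booted, management network up, inventory address correct), not in the test code.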
__________________ ERROR at setup of test_check_bgp_neighbors __________________


common/devices/sonic.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    def _get_platform_info(self):
        """
        Gets platform information about this SONiC device.
        """
    
>       platform_info = self.command("show platform summary")["stdout_lines"]

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>, module_args = ['show platform summary']
complex_args = {}
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
filename = '/data/sonic-mgmt/tests/common/devices/sonic.py', line_number = 311
function_name = '_get_platform_info'
lines = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
index = 0, verbose = True, module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
>       res = self.module(*module_args, **complex_args)[self.hostname]

complex_args = {}
filename   = '/data/sonic-mgmt/tests/common/devices/sonic.py'
function_name = '_get_platform_info'
index      = 0
line_number = 311
lines      = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
module_args = ['show platform summary']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
self       = <SonicHost vlab-c-01>
verbose    = True

common/devices/base.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
module_args = ('show platform summary',)
complex_args = {'_raw_params': 'show platform summary'}, hosts = [vlab-c-01]
no_hosts = False
args = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
verbosity = None, verbosity_syntax = '-vvvvv', argument = 'module-path'
arg_value = ['/data/sonic-mgmt/ansible/library']
cb = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
kwargs = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}

    def _run(self, *module_args, **complex_args):
        """Execute an ansible adhoc command returning the result in a AdhocResult object."""
        # Assemble module argument string
        if module_args:
            complex_args.update(dict(_raw_params=' '.join(module_args)))
    
        # Assert hosts matching the provided pattern exist
        hosts = self.options['inventory_manager'].list_hosts()
        no_hosts = False
        if len(hosts) == 0:
            no_hosts = True
            warnings.warn("provided hosts list is empty, only localhost is available")
    
        self.options['inventory_manager'].subset(self.options.get('subset'))
        hosts = self.options['inventory_manager'].list_hosts(self.options['host_pattern'])
        if len(hosts) == 0 and not no_hosts:
            raise ansible.errors.AnsibleError("Specified hosts and/or --limit does not match any hosts")
    
        # Pass along cli options
        args = ['pytest-ansible']
        verbosity = None
        for verbosity_syntax in ('-v', '-vv', '-vvv', '-vvvv', '-vvvvv'):
            if verbosity_syntax in sys.argv:
                verbosity = verbosity_syntax
                break
        if verbosity is not None:
            args.append(verbosity_syntax)
        args.extend([self.options['host_pattern']])
        for argument in ('connection', 'user', 'become', 'become_method', 'become_user', 'module_path'):
            arg_value = self.options.get(argument)
            argument = argument.replace('_', '-')
    
            if arg_value in (None, False):
                continue
    
            if arg_value is True:
                args.append('--{0}'.format(argument))
            else:
                args.append('--{0}={1}'.format(argument, arg_value))
    
        # Use Ansible's own adhoc cli to parse the fake command line we created and then save it
        # into Ansible's global context
        adhoc = AdHocCLI(args)
        adhoc.parse()
    
        # And now we'll never speak of this again
        del adhoc
    
        # Initialize callback to capture module JSON responses
        cb = ResultAccumulator()
    
        kwargs = dict(
            inventory=self.options['inventory_manager'],
            variable_manager=self.options['variable_manager'],
            loader=self.options['loader'],
            stdout_callback=cb,
            passwords=dict(conn_pass=None, become_pass=None),
        )
    
        # create a pseudo-play to execute the specified module via a single task
        play_ds = dict(
            name="pytest-ansible",
            hosts=self.options['host_pattern'],
            become=self.options.get('become'),
            become_user=self.options.get('become_user'),
            gather_facts='no',
            tasks=[
                dict(
                    action=dict(
                        module=self.options['module_name'], args=complex_args
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
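The traceback above shows the failure chain: `SonicHost.__init__` gathers facts at construction time, the facts cache misses for zone `vlab-c-01`, so `_gather_facts` dispatches `show platform summary` over Ansible, and the unreachable host surfaces as `AnsibleConnectionFailure` inside the `duthosts` fixture. The cache-miss-runs-target pattern from the `facts_cache` frame can be sketched as follows; all names here are simplified stand-ins for illustration, not the sonic-mgmt implementation:

```python
# Minimal sketch of the zone-keyed facts cache seen in the traceback.
# FactsCache / cached below are simplified stand-ins, not the real code.

class FactsCache:
    NOTEXIST = object()  # sentinel: distinguishes "no entry" from a cached None

    def __init__(self):
        self._store = {}

    def read(self, zone, name):
        return self._store.get((zone, name), FactsCache.NOTEXIST)

    def write(self, zone, name, facts):
        self._store[(zone, name)] = facts


_cache = FactsCache()


def cached(name):
    """Cache the wrapped target's result per zone (here: per hostname,
    taken from the first positional argument, i.e. the host object)."""
    def decorator(target):
        def wrapper(*args, **kwargs):
            zone = args[0].hostname          # default zone getter: the host itself
            facts = _cache.read(zone, name)
            if facts is not FactsCache.NOTEXIST:
                return facts                 # cache hit: no device access at all
            facts = target(*args, **kwargs)  # cache miss: this is the point where
            _cache.write(zone, name, facts)  # _gather_facts really talks to the DUT
            return facts
        return wrapper
    return decorator
```

This explains why the error fires during fixture setup rather than in a test body: on the first construction of the host object the cache misses, the decorated `_gather_facts` runs a real command against the device, and any connectivity problem propagates out of `__init__`.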
_____________________ ERROR at setup of test_check_routes ______________________

________________ ERROR at setup of test_traffic_check_via_trex _________________

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
>           host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:363: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duts = ['vlab-c-01']

    def __init__(self, ansible_adhoc, tbinfo, duts):
        """ Initialize a multi-dut testbed with all the DUT's defined in testbed info.
    
        Args:
            ansible_adhoc: The pytest-ansible fixture
            tbinfo - Testbed info whose "duts" holds the hostnames for the DUT's in the multi-dut testbed.
            duts - list of DUT hostnames from the `--host-pattern` CLI option. Can be specified if only a subset of
                   DUTs in the testbed should be used
    
        """
        self.ansible_adhoc = ansible_adhoc
        self.tbinfo = tbinfo
        self.duts = duts
>       self.__initialize_nodes()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duts       = ['vlab-c-01']
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

common/devices/duthosts.py:60: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

    def __initialize_nodes(self):
        # TODO: Initialize the nodes in parallel using multi-threads?
>       self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                                  for hostname in self.tbinfo["duts"] if hostname in self.duts])

self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

.0 = <list_iterator object at 0x7f745363d580>

>   self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                              for hostname in self.tbinfo["duts"] if hostname in self.duts])

.0         = <list_iterator object at 0x7f745363d580>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01'
duthosts = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
topo_type = 'ciscovs'

    def __init__(self, ansible_adhoc, hostname, duthosts, topo_type):
        """ Initializing a MultiAsicSonicHost.
    
        Args:
            ansible_adhoc : The pytest-ansible fixture
            hostname: Name of the host in the ansible inventory
        """
        self.duthosts = duthosts
        self.topo_type = topo_type
        self.loganalyzer = None
>       self.sonichost = SonicHost(ansible_adhoc, hostname)

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duthosts   = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
topo_type  = 'ciscovs'

common/devices/multi_asic.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01', shell_user = None, shell_passwd = None, ssh_user = None
ssh_passwd = None

    def __init__(self, ansible_adhoc, hostname,
                 shell_user=None, shell_passwd=None,
                 ssh_user=None, ssh_passwd=None):
        AnsibleHostBase.__init__(self, ansible_adhoc, hostname)
    
        self.DEFAULT_ASIC_SERVICES = ["bgp", "database", "lldp", "swss", "syncd", "teamd"]
    
        if shell_user and shell_passwd:
            im = self.host.options['inventory_manager']
            vm = self.host.options['variable_manager']
            sonic_conn = vm.get_vars(
                host=im.get_hosts(pattern='sonic')[0]
            )['ansible_connection']
            hostvars = vm.get_vars(host=im.get_host(hostname=self.hostname))
            # parse connection options and reset those options with
            # passed credentials
            connection_loader.get(sonic_conn, class_only=True)
            user_def = ansible_constants.config.get_configuration_definition(
                "remote_user", "connection", sonic_conn
            )
            pass_def = ansible_constants.config.get_configuration_definition(
                "password", "connection", sonic_conn
            )
            for user_var in (_['name'] for _ in user_def['vars']):
                if user_var in hostvars:
                    vm.extra_vars.update({user_var: shell_user})
            for pass_var in (_['name'] for _ in pass_def['vars']):
                if pass_var in hostvars:
                    vm.extra_vars.update({pass_var: shell_passwd})
    
        if ssh_user and ssh_passwd:
            evars = {
                'ansible_ssh_user': ssh_user,
                'ansible_ssh_pass': ssh_passwd,
            }
            self.host.options['variable_manager'].extra_vars.update(evars)
    
>       self._facts = self._gather_facts()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname   = 'vlab-c-01'
self       = <SonicHost vlab-c-01>
shell_passwd = None
shell_user = None
ssh_passwd = None
ssh_user   = None

common/devices/sonic.py:86: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<SonicHost vlab-c-01>,), kargs = {}
_zone_getter = <function _get_default_zone at 0x7f74558a1550>
zone = 'vlab-c-01', cached_facts = <object object at 0x7f7459cf5770>

    def wrapper(*args, **kargs):
        _zone_getter = zone_getter or _get_default_zone
        zone = _zone_getter(target, args, kargs)
    
        cached_facts = cache.read(zone, name)
        if after_read:
            cached_facts = after_read(cached_facts, target, args, kargs)
        if cached_facts is not FactsCache.NOTEXIST:
            return cached_facts
        else:
>           facts = target(*args, **kargs)

_zone_getter = <function _get_default_zone at 0x7f74558a1550>
after_read = None
args       = (<SonicHost vlab-c-01>,)
before_write = None
cache      = <tests.common.cache.facts_cache.FactsCache object at 0x7f74568f7580>
cached_facts = <object object at 0x7f7459cf5770>
kargs      = {}
name       = 'basic_facts'
target     = <function SonicHost._gather_facts at 0x7f7453c98550>
zone       = 'vlab-c-01'
zone_getter = None

common/cache/facts_cache.py:228: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    @cached(name='basic_facts')
    def _gather_facts(self):
        """
        Gather facts about the platform for this SONiC device.
        """
>       facts = self._get_platform_info()

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    def _get_platform_info(self):
        """
        Gets platform information about this SONiC device.
        """
    
>       platform_info = self.command("show platform summary")["stdout_lines"]

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>, module_args = ['show platform summary']
complex_args = {}
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
filename = '/data/sonic-mgmt/tests/common/devices/sonic.py', line_number = 311
function_name = '_get_platform_info'
lines = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
index = 0, verbose = True, module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
>       res = self.module(*module_args, **complex_args)[self.hostname]

complex_args = {}
filename   = '/data/sonic-mgmt/tests/common/devices/sonic.py'
function_name = '_get_platform_info'
index      = 0
line_number = 311
lines      = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
module_args = ['show platform summary']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
self       = <SonicHost vlab-c-01>
verbose    = True

common/devices/base.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
module_args = ('show platform summary',)
complex_args = {'_raw_params': 'show platform summary'}, hosts = [vlab-c-01]
no_hosts = False
args = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
verbosity = None, verbosity_syntax = '-vvvvv', argument = 'module-path'
arg_value = ['/data/sonic-mgmt/ansible/library']
cb = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
kwargs = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}

    def _run(self, *module_args, **complex_args):
        """Execute an ansible adhoc command returning the result in a AdhocResult object."""
        # Assemble module argument string
        if module_args:
            complex_args.update(dict(_raw_params=' '.join(module_args)))
    
        # Assert hosts matching the provided pattern exist
        hosts = self.options['inventory_manager'].list_hosts()
        no_hosts = False
        if len(hosts) == 0:
            no_hosts = True
            warnings.warn("provided hosts list is empty, only localhost is available")
    
        self.options['inventory_manager'].subset(self.options.get('subset'))
        hosts = self.options['inventory_manager'].list_hosts(self.options['host_pattern'])
        if len(hosts) == 0 and not no_hosts:
            raise ansible.errors.AnsibleError("Specified hosts and/or --limit does not match any hosts")
    
        # Pass along cli options
        args = ['pytest-ansible']
        verbosity = None
        for verbosity_syntax in ('-v', '-vv', '-vvv', '-vvvv', '-vvvvv'):
            if verbosity_syntax in sys.argv:
                verbosity = verbosity_syntax
                break
        if verbosity is not None:
            args.append(verbosity_syntax)
        args.extend([self.options['host_pattern']])
        for argument in ('connection', 'user', 'become', 'become_method', 'become_user', 'module_path'):
            arg_value = self.options.get(argument)
            argument = argument.replace('_', '-')
    
            if arg_value in (None, False):
                continue
    
            if arg_value is True:
                args.append('--{0}'.format(argument))
            else:
                args.append('--{0}={1}'.format(argument, arg_value))
    
        # Use Ansible's own adhoc cli to parse the fake command line we created and then save it
        # into Ansible's global context
        adhoc = AdHocCLI(args)
        adhoc.parse()
    
        # And now we'll never speak of this again
        del adhoc
    
        # Initialize callback to capture module JSON responses
        cb = ResultAccumulator()
    
        kwargs = dict(
            inventory=self.options['inventory_manager'],
            variable_manager=self.options['variable_manager'],
            loader=self.options['loader'],
            stdout_callback=cb,
            passwords=dict(conn_pass=None, become_pass=None),
        )
    
        # create a pseudo-play to execute the specified module via a single task
        play_ds = dict(
            name="pytest-ansible",
            hosts=self.options['host_pattern'],
            become=self.options.get('become'),
            become_user=self.options.get('become_user'),
            gather_facts='no',
            tasks=[
                dict(
                    action=dict(
                        module=self.options['module_name'], args=complex_args
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
_________________ ERROR at setup of test_traffic_check_via_ptf _________________

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
>           host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:363: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duts = ['vlab-c-01']

    def __init__(self, ansible_adhoc, tbinfo, duts):
        """ Initialize a multi-dut testbed with all the DUT's defined in testbed info.
    
        Args:
            ansible_adhoc: The pytest-ansible fixture
            tbinfo - Testbed info whose "duts" holds the hostnames for the DUT's in the multi-dut testbed.
            duts - list of DUT hostnames from the `--host-pattern` CLI option. Can be specified if only a subset of
                   DUTs in the testbed should be used
    
        """
        self.ansible_adhoc = ansible_adhoc
        self.tbinfo = tbinfo
        self.duts = duts
>       self.__initialize_nodes()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duts       = ['vlab-c-01']
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

common/devices/duthosts.py:60: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

    def __initialize_nodes(self):
        # TODO: Initialize the nodes in parallel using multi-threads?
>       self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                                  for hostname in self.tbinfo["duts"] if hostname in self.duts])

self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

.0 = <list_iterator object at 0x7f745363d580>

>   self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                              for hostname in self.tbinfo["duts"] if hostname in self.duts])

.0         = <list_iterator object at 0x7f745363d580>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01'
duthosts = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
topo_type = 'ciscovs'

    def __init__(self, ansible_adhoc, hostname, duthosts, topo_type):
        """ Initializing a MultiAsicSonicHost.
    
        Args:
            ansible_adhoc : The pytest-ansible fixture
            hostname: Name of the host in the ansible inventory
        """
        self.duthosts = duthosts
        self.topo_type = topo_type
        self.loganalyzer = None
>       self.sonichost = SonicHost(ansible_adhoc, hostname)

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duthosts   = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
topo_type  = 'ciscovs'

common/devices/multi_asic.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01', shell_user = None, shell_passwd = None, ssh_user = None
ssh_passwd = None

    def __init__(self, ansible_adhoc, hostname,
                 shell_user=None, shell_passwd=None,
                 ssh_user=None, ssh_passwd=None):
        AnsibleHostBase.__init__(self, ansible_adhoc, hostname)
    
        self.DEFAULT_ASIC_SERVICES = ["bgp", "database", "lldp", "swss", "syncd", "teamd"]
    
        if shell_user and shell_passwd:
            im = self.host.options['inventory_manager']
            vm = self.host.options['variable_manager']
            sonic_conn = vm.get_vars(
                host=im.get_hosts(pattern='sonic')[0]
            )['ansible_connection']
            hostvars = vm.get_vars(host=im.get_host(hostname=self.hostname))
            # parse connection options and reset those options with
            # passed credentials
            connection_loader.get(sonic_conn, class_only=True)
            user_def = ansible_constants.config.get_configuration_definition(
                "remote_user", "connection", sonic_conn
            )
            pass_def = ansible_constants.config.get_configuration_definition(
                "password", "connection", sonic_conn
            )
            for user_var in (_['name'] for _ in user_def['vars']):
                if user_var in hostvars:
                    vm.extra_vars.update({user_var: shell_user})
            for pass_var in (_['name'] for _ in pass_def['vars']):
                if pass_var in hostvars:
                    vm.extra_vars.update({pass_var: shell_passwd})
    
        if ssh_user and ssh_passwd:
            evars = {
                'ansible_ssh_user': ssh_user,
                'ansible_ssh_pass': ssh_passwd,
            }
            self.host.options['variable_manager'].extra_vars.update(evars)
    
>       self._facts = self._gather_facts()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname   = 'vlab-c-01'
self       = <SonicHost vlab-c-01>
shell_passwd = None
shell_user = None
ssh_passwd = None
ssh_user   = None

common/devices/sonic.py:86: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<SonicHost vlab-c-01>,), kargs = {}
_zone_getter = <function _get_default_zone at 0x7f74558a1550>
zone = 'vlab-c-01', cached_facts = <object object at 0x7f7459cf5770>

    def wrapper(*args, **kargs):
        _zone_getter = zone_getter or _get_default_zone
        zone = _zone_getter(target, args, kargs)
    
        cached_facts = cache.read(zone, name)
        if after_read:
            cached_facts = after_read(cached_facts, target, args, kargs)
        if cached_facts is not FactsCache.NOTEXIST:
            return cached_facts
        else:
>           facts = target(*args, **kargs)

_zone_getter = <function _get_default_zone at 0x7f74558a1550>
after_read = None
args       = (<SonicHost vlab-c-01>,)
before_write = None
cache      = <tests.common.cache.facts_cache.FactsCache object at 0x7f74568f7580>
cached_facts = <object object at 0x7f7459cf5770>
kargs      = {}
name       = 'basic_facts'
target     = <function SonicHost._gather_facts at 0x7f7453c98550>
zone       = 'vlab-c-01'
zone_getter = None

common/cache/facts_cache.py:228: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    @cached(name='basic_facts')
    def _gather_facts(self):
        """
        Gather facts about the platform for this SONiC device.
        """
>       facts = self._get_platform_info()

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    def _get_platform_info(self):
        """
        Gets platform information about this SONiC device.
        """
    
>       platform_info = self.command("show platform summary")["stdout_lines"]

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>, module_args = ['show platform summary']
complex_args = {}
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
filename = '/data/sonic-mgmt/tests/common/devices/sonic.py', line_number = 311
function_name = '_get_platform_info'
lines = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
index = 0, verbose = True, module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
>       res = self.module(*module_args, **complex_args)[self.hostname]

complex_args = {}
filename   = '/data/sonic-mgmt/tests/common/devices/sonic.py'
function_name = '_get_platform_info'
index      = 0
line_number = 311
lines      = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
module_args = ['show platform summary']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
self       = <SonicHost vlab-c-01>
verbose    = True

common/devices/base.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
module_args = ('show platform summary',)
complex_args = {'_raw_params': 'show platform summary'}, hosts = [vlab-c-01]
no_hosts = False
args = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
verbosity = None, verbosity_syntax = '-vvvvv', argument = 'module-path'
arg_value = ['/data/sonic-mgmt/ansible/library']
cb = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
kwargs = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}

    def _run(self, *module_args, **complex_args):
        """Execute an ansible adhoc command returning the result in a AdhocResult object."""
        # Assemble module argument string
        if module_args:
            complex_args.update(dict(_raw_params=' '.join(module_args)))
    
        # Assert hosts matching the provided pattern exist
        hosts = self.options['inventory_manager'].list_hosts()
        no_hosts = False
        if len(hosts) == 0:
            no_hosts = True
            warnings.warn("provided hosts list is empty, only localhost is available")
    
        self.options['inventory_manager'].subset(self.options.get('subset'))
        hosts = self.options['inventory_manager'].list_hosts(self.options['host_pattern'])
        if len(hosts) == 0 and not no_hosts:
            raise ansible.errors.AnsibleError("Specified hosts and/or --limit does not match any hosts")
    
        # Pass along cli options
        args = ['pytest-ansible']
        verbosity = None
        for verbosity_syntax in ('-v', '-vv', '-vvv', '-vvvv', '-vvvvv'):
            if verbosity_syntax in sys.argv:
                verbosity = verbosity_syntax
                break
        if verbosity is not None:
            args.append(verbosity_syntax)
        args.extend([self.options['host_pattern']])
        for argument in ('connection', 'user', 'become', 'become_method', 'become_user', 'module_path'):
            arg_value = self.options.get(argument)
            argument = argument.replace('_', '-')
    
            if arg_value in (None, False):
                continue
    
            if arg_value is True:
                args.append('--{0}'.format(argument))
            else:
                args.append('--{0}={1}'.format(argument, arg_value))
    
        # Use Ansible's own adhoc cli to parse the fake command line we created and then save it
        # into Ansible's global context
        adhoc = AdHocCLI(args)
        adhoc.parse()
    
        # And now we'll never speak of this again
        del adhoc
    
        # Initialize callback to capture module JSON responses
        cb = ResultAccumulator()
    
        kwargs = dict(
            inventory=self.options['inventory_manager'],
            variable_manager=self.options['variable_manager'],
            loader=self.options['loader'],
            stdout_callback=cb,
            passwords=dict(conn_pass=None, become_pass=None),
        )
    
        # create a pseudo-play to execute the specified module via a single task
        play_ds = dict(
            name="pytest-ansible",
            hosts=self.options['host_pattern'],
            become=self.options.get('become'),
            become_user=self.options.get('become_user'),
            gather_facts='no',
            tasks=[
                dict(
                    action=dict(
                        module=self.options['module_name'], args=complex_args
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
__________ ERROR at setup of test_traffic_check_local_link_fail_case ___________

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
>           host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:363: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duts = ['vlab-c-01']

    def __init__(self, ansible_adhoc, tbinfo, duts):
        """ Initialize a multi-dut testbed with all the DUT's defined in testbed info.
    
        Args:
            ansible_adhoc: The pytest-ansible fixture
            tbinfo - Testbed info whose "duts" holds the hostnames for the DUT's in the multi-dut testbed.
            duts - list of DUT hostnames from the `--host-pattern` CLI option. Can be specified if only a subset of
                   DUTs in the testbed should be used
    
        """
        self.ansible_adhoc = ansible_adhoc
        self.tbinfo = tbinfo
        self.duts = duts
>       self.__initialize_nodes()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duts       = ['vlab-c-01']
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

common/devices/duthosts.py:60: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

    def __initialize_nodes(self):
        # TODO: Initialize the nodes in parallel using multi-threads?
>       self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                                  for hostname in self.tbinfo["duts"] if hostname in self.duts])

self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

.0 = <list_iterator object at 0x7f745363d580>

>   self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                              for hostname in self.tbinfo["duts"] if hostname in self.duts])

.0         = <list_iterator object at 0x7f745363d580>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01'
duthosts = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
topo_type = 'ciscovs'

    def __init__(self, ansible_adhoc, hostname, duthosts, topo_type):
        """ Initializing a MultiAsicSonicHost.
    
        Args:
            ansible_adhoc : The pytest-ansible fixture
            hostname: Name of the host in the ansible inventory
        """
        self.duthosts = duthosts
        self.topo_type = topo_type
        self.loganalyzer = None
>       self.sonichost = SonicHost(ansible_adhoc, hostname)

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duthosts   = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
topo_type  = 'ciscovs'

common/devices/multi_asic.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01', shell_user = None, shell_passwd = None, ssh_user = None
ssh_passwd = None

    def __init__(self, ansible_adhoc, hostname,
                 shell_user=None, shell_passwd=None,
                 ssh_user=None, ssh_passwd=None):
        AnsibleHostBase.__init__(self, ansible_adhoc, hostname)
    
        self.DEFAULT_ASIC_SERVICES = ["bgp", "database", "lldp", "swss", "syncd", "teamd"]
    
        if shell_user and shell_passwd:
            im = self.host.options['inventory_manager']
            vm = self.host.options['variable_manager']
            sonic_conn = vm.get_vars(
                host=im.get_hosts(pattern='sonic')[0]
            )['ansible_connection']
            hostvars = vm.get_vars(host=im.get_host(hostname=self.hostname))
            # parse connection options and reset those options with
            # passed credentials
            connection_loader.get(sonic_conn, class_only=True)
            user_def = ansible_constants.config.get_configuration_definition(
                "remote_user", "connection", sonic_conn
            )
            pass_def = ansible_constants.config.get_configuration_definition(
                "password", "connection", sonic_conn
            )
            for user_var in (_['name'] for _ in user_def['vars']):
                if user_var in hostvars:
                    vm.extra_vars.update({user_var: shell_user})
            for pass_var in (_['name'] for _ in pass_def['vars']):
                if pass_var in hostvars:
                    vm.extra_vars.update({pass_var: shell_passwd})
    
        if ssh_user and ssh_passwd:
            evars = {
                'ansible_ssh_user': ssh_user,
                'ansible_ssh_pass': ssh_passwd,
            }
            self.host.options['variable_manager'].extra_vars.update(evars)
    
>       self._facts = self._gather_facts()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname   = 'vlab-c-01'
self       = <SonicHost vlab-c-01>
shell_passwd = None
shell_user = None
ssh_passwd = None
ssh_user   = None

common/devices/sonic.py:86: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<SonicHost vlab-c-01>,), kargs = {}
_zone_getter = <function _get_default_zone at 0x7f74558a1550>
zone = 'vlab-c-01', cached_facts = <object object at 0x7f7459cf5770>

    def wrapper(*args, **kargs):
        _zone_getter = zone_getter or _get_default_zone
        zone = _zone_getter(target, args, kargs)
    
        cached_facts = cache.read(zone, name)
        if after_read:
            cached_facts = after_read(cached_facts, target, args, kargs)
        if cached_facts is not FactsCache.NOTEXIST:
            return cached_facts
        else:
>           facts = target(*args, **kargs)

_zone_getter = <function _get_default_zone at 0x7f74558a1550>
after_read = None
args       = (<SonicHost vlab-c-01>,)
before_write = None
cache      = <tests.common.cache.facts_cache.FactsCache object at 0x7f74568f7580>
cached_facts = <object object at 0x7f7459cf5770>
kargs      = {}
name       = 'basic_facts'
target     = <function SonicHost._gather_facts at 0x7f7453c98550>
zone       = 'vlab-c-01'
zone_getter = None

common/cache/facts_cache.py:228: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    @cached(name='basic_facts')
    def _gather_facts(self):
        """
        Gather facts about the platform for this SONiC device.
        """
>       facts = self._get_platform_info()

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    def _get_platform_info(self):
        """
        Gets platform information about this SONiC device.
        """
    
>       platform_info = self.command("show platform summary")["stdout_lines"]

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>, module_args = ['show platform summary']
complex_args = {}
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
filename = '/data/sonic-mgmt/tests/common/devices/sonic.py', line_number = 311
function_name = '_get_platform_info'
lines = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
index = 0, verbose = True, module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
>       res = self.module(*module_args, **complex_args)[self.hostname]

complex_args = {}
filename   = '/data/sonic-mgmt/tests/common/devices/sonic.py'
function_name = '_get_platform_info'
index      = 0
line_number = 311
lines      = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
module_args = ['show platform summary']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
self       = <SonicHost vlab-c-01>
verbose    = True

common/devices/base.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
module_args = ('show platform summary',)
complex_args = {'_raw_params': 'show platform summary'}, hosts = [vlab-c-01]
no_hosts = False
args = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
verbosity = None, verbosity_syntax = '-vvvvv', argument = 'module-path'
arg_value = ['/data/sonic-mgmt/ansible/library']
cb = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
kwargs = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}

    def _run(self, *module_args, **complex_args):
        """Execute an ansible adhoc command returning the result in a AdhocResult object."""
        # Assemble module argument string
        if module_args:
            complex_args.update(dict(_raw_params=' '.join(module_args)))
    
        # Assert hosts matching the provided pattern exist
        hosts = self.options['inventory_manager'].list_hosts()
        no_hosts = False
        if len(hosts) == 0:
            no_hosts = True
            warnings.warn("provided hosts list is empty, only localhost is available")
    
        self.options['inventory_manager'].subset(self.options.get('subset'))
        hosts = self.options['inventory_manager'].list_hosts(self.options['host_pattern'])
        if len(hosts) == 0 and not no_hosts:
            raise ansible.errors.AnsibleError("Specified hosts and/or --limit does not match any hosts")
    
        # Pass along cli options
        args = ['pytest-ansible']
        verbosity = None
        for verbosity_syntax in ('-v', '-vv', '-vvv', '-vvvv', '-vvvvv'):
            if verbosity_syntax in sys.argv:
                verbosity = verbosity_syntax
                break
        if verbosity is not None:
            args.append(verbosity_syntax)
        args.extend([self.options['host_pattern']])
        for argument in ('connection', 'user', 'become', 'become_method', 'become_user', 'module_path'):
            arg_value = self.options.get(argument)
            argument = argument.replace('_', '-')
    
            if arg_value in (None, False):
                continue
    
            if arg_value is True:
                args.append('--{0}'.format(argument))
            else:
                args.append('--{0}={1}'.format(argument, arg_value))
    
        # Use Ansible's own adhoc cli to parse the fake command line we created and then save it
        # into Ansible's global context
        adhoc = AdHocCLI(args)
        adhoc.parse()
    
        # And now we'll never speak of this again
        del adhoc
    
        # Initialize callback to capture module JSON responses
        cb = ResultAccumulator()
    
        kwargs = dict(
            inventory=self.options['inventory_manager'],
            variable_manager=self.options['variable_manager'],
            loader=self.options['loader'],
            stdout_callback=cb,
            passwords=dict(conn_pass=None, become_pass=None),
        )
    
        # create a pseudo-play to execute the specified module via a single task
        play_ds = dict(
            name="pytest-ansible",
            hosts=self.options['host_pattern'],
            become=self.options.get('become'),
            become_user=self.options.get('become_user'),
            gather_facts='no',
            tasks=[
                dict(
                    action=dict(
                        module=self.options['module_name'], args=complex_args
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
__________ ERROR at setup of test_traffic_check_remote_igp_fail_case ___________

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
>           host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:363: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
duts = ['vlab-c-01']

    def __init__(self, ansible_adhoc, tbinfo, duts):
        """ Initialize a multi-dut testbed with all the DUT's defined in testbed info.
    
        Args:
            ansible_adhoc: The pytest-ansible fixture
            tbinfo - Testbed info whose "duts" holds the hostnames for the DUT's in the multi-dut testbed.
            duts - list of DUT hostnames from the `--host-pattern` CLI option. Can be specified if only a subset of
                   DUTs in the testbed should be used
    
        """
        self.ansible_adhoc = ansible_adhoc
        self.tbinfo = tbinfo
        self.duts = duts
>       self.__initialize_nodes()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duts       = ['vlab-c-01']
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

common/devices/duthosts.py:60: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

    def __initialize_nodes(self):
        # TODO: Initialize the nodes in parallel using multi-threads?
>       self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                                  for hostname in self.tbinfo["duts"] if hostname in self.duts])

self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

.0 = <list_iterator object at 0x7f745363d580>

>   self.nodes = self._Nodes([MultiAsicSonicHost(self.ansible_adhoc, hostname, self, self.tbinfo['topo']['type'])
                              for hostname in self.tbinfo["duts"] if hostname in self.duts])

.0         = <list_iterator object at 0x7f745363d580>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>

common/devices/duthosts.py:64: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01'
duthosts = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
topo_type = 'ciscovs'

    def __init__(self, ansible_adhoc, hostname, duthosts, topo_type):
        """ Initializing a MultiAsicSonicHost.
    
        Args:
            ansible_adhoc : The pytest-ansible fixture
            hostname: Name of the host in the ansible inventory
        """
        self.duthosts = duthosts
        self.topo_type = topo_type
        self.loganalyzer = None
>       self.sonichost = SonicHost(ansible_adhoc, hostname)

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
duthosts   = <[RecursionError('maximum recursion depth exceeded') raised in repr()] DutHosts object at 0x7f745363d550>
hostname   = 'vlab-c-01'
self       = <[RecursionError('maximum recursion depth exceeded while calling a Python object') raised in repr()] MultiAsicSonicHost object at 0x7f745363df70>
topo_type  = 'ciscovs'

common/devices/multi_asic.py:36: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname = 'vlab-c-01', shell_user = None, shell_passwd = None, ssh_user = None
ssh_passwd = None

    def __init__(self, ansible_adhoc, hostname,
                 shell_user=None, shell_passwd=None,
                 ssh_user=None, ssh_passwd=None):
        AnsibleHostBase.__init__(self, ansible_adhoc, hostname)
    
        self.DEFAULT_ASIC_SERVICES = ["bgp", "database", "lldp", "swss", "syncd", "teamd"]
    
        if shell_user and shell_passwd:
            im = self.host.options['inventory_manager']
            vm = self.host.options['variable_manager']
            sonic_conn = vm.get_vars(
                host=im.get_hosts(pattern='sonic')[0]
            )['ansible_connection']
            hostvars = vm.get_vars(host=im.get_host(hostname=self.hostname))
            # parse connection options and reset those options with
            # passed credentials
            connection_loader.get(sonic_conn, class_only=True)
            user_def = ansible_constants.config.get_configuration_definition(
                "remote_user", "connection", sonic_conn
            )
            pass_def = ansible_constants.config.get_configuration_definition(
                "password", "connection", sonic_conn
            )
            for user_var in (_['name'] for _ in user_def['vars']):
                if user_var in hostvars:
                    vm.extra_vars.update({user_var: shell_user})
            for pass_var in (_['name'] for _ in pass_def['vars']):
                if pass_var in hostvars:
                    vm.extra_vars.update({pass_var: shell_passwd})
    
        if ssh_user and ssh_passwd:
            evars = {
                'ansible_ssh_user': ssh_user,
                'ansible_ssh_pass': ssh_passwd,
            }
            self.host.options['variable_manager'].extra_vars.update(evars)
    
>       self._facts = self._gather_facts()

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
hostname   = 'vlab-c-01'
self       = <SonicHost vlab-c-01>
shell_passwd = None
shell_user = None
ssh_passwd = None
ssh_user   = None

common/devices/sonic.py:86: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<SonicHost vlab-c-01>,), kargs = {}
_zone_getter = <function _get_default_zone at 0x7f74558a1550>
zone = 'vlab-c-01', cached_facts = <object object at 0x7f7459cf5770>

    def wrapper(*args, **kargs):
        _zone_getter = zone_getter or _get_default_zone
        zone = _zone_getter(target, args, kargs)
    
        cached_facts = cache.read(zone, name)
        if after_read:
            cached_facts = after_read(cached_facts, target, args, kargs)
        if cached_facts is not FactsCache.NOTEXIST:
            return cached_facts
        else:
>           facts = target(*args, **kargs)

_zone_getter = <function _get_default_zone at 0x7f74558a1550>
after_read = None
args       = (<SonicHost vlab-c-01>,)
before_write = None
cache      = <tests.common.cache.facts_cache.FactsCache object at 0x7f74568f7580>
cached_facts = <object object at 0x7f7459cf5770>
kargs      = {}
name       = 'basic_facts'
target     = <function SonicHost._gather_facts at 0x7f7453c98550>
zone       = 'vlab-c-01'
zone_getter = None

common/cache/facts_cache.py:228: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    @cached(name='basic_facts')
    def _gather_facts(self):
        """
        Gather facts about the platform for this SONiC device.
        """
>       facts = self._get_platform_info()

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>

    def _get_platform_info(self):
        """
        Gets platform information about this SONiC device.
        """
    
>       platform_info = self.command("show platform summary")["stdout_lines"]

self       = <SonicHost vlab-c-01>

common/devices/sonic.py:311: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <SonicHost vlab-c-01>, module_args = ['show platform summary']
complex_args = {}
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
filename = '/data/sonic-mgmt/tests/common/devices/sonic.py', line_number = 311
function_name = '_get_platform_info'
lines = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
index = 0, verbose = True, module_ignore_errors = False, module_async = False

    def _run(self, *module_args, **complex_args):
    
        previous_frame = inspect.currentframe().f_back
        filename, line_number, function_name, lines, index = inspect.getframeinfo(previous_frame)
    
        verbose = complex_args.pop('verbose', True)
    
        if verbose:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{}, args={}, kwargs={}".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name,
                    json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder),
                    json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder)
                )
            )
        else:
            logger.debug(
                "{}::{}#{}: [{}] AnsibleModule::{} executing...".format(
                    filename,
                    function_name,
                    line_number,
                    self.hostname,
                    self.module_name
                )
            )
    
        module_ignore_errors = complex_args.pop('module_ignore_errors', False)
        module_async = complex_args.pop('module_async', False)
    
        if module_async:
            def run_module(module_args, complex_args):
                return self.module(*module_args, **complex_args)[self.hostname]
            pool = ThreadPool()
            result = pool.apply_async(run_module, (module_args, complex_args))
            return pool, result
    
        module_args = json.loads(json.dumps(module_args, cls=AnsibleHostBase.CustomEncoder))
        complex_args = json.loads(json.dumps(complex_args, cls=AnsibleHostBase.CustomEncoder))
>       res = self.module(*module_args, **complex_args)[self.hostname]

complex_args = {}
filename   = '/data/sonic-mgmt/tests/common/devices/sonic.py'
function_name = '_get_platform_info'
index      = 0
line_number = 311
lines      = ['        platform_info = self.command("show platform summary")["stdout_lines"]\n']
module_args = ['show platform summary']
module_async = False
module_ignore_errors = False
previous_frame = <frame at 0x7f74536ff840, file '/data/sonic-mgmt/tests/common/devices/sonic.py', line 311, code _get_platform_info>
self       = <SonicHost vlab-c-01>
verbose    = True

common/devices/base.py:105: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
module_args = ('show platform summary',)
complex_args = {'_raw_params': 'show platform summary'}, hosts = [vlab-c-01]
no_hosts = False
args = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
verbosity = None, verbosity_syntax = '-vvvvv', argument = 'module-path'
arg_value = ['/data/sonic-mgmt/ansible/library']
cb = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
kwargs = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}

    def _run(self, *module_args, **complex_args):
        """Execute an ansible adhoc command returning the result in a AdhocResult object."""
        # Assemble module argument string
        if module_args:
            complex_args.update(dict(_raw_params=' '.join(module_args)))
    
        # Assert hosts matching the provided pattern exist
        hosts = self.options['inventory_manager'].list_hosts()
        no_hosts = False
        if len(hosts) == 0:
            no_hosts = True
            warnings.warn("provided hosts list is empty, only localhost is available")
    
        self.options['inventory_manager'].subset(self.options.get('subset'))
        hosts = self.options['inventory_manager'].list_hosts(self.options['host_pattern'])
        if len(hosts) == 0 and not no_hosts:
            raise ansible.errors.AnsibleError("Specified hosts and/or --limit does not match any hosts")
    
        # Pass along cli options
        args = ['pytest-ansible']
        verbosity = None
        for verbosity_syntax in ('-v', '-vv', '-vvv', '-vvvv', '-vvvvv'):
            if verbosity_syntax in sys.argv:
                verbosity = verbosity_syntax
                break
        if verbosity is not None:
            args.append(verbosity_syntax)
        args.extend([self.options['host_pattern']])
        for argument in ('connection', 'user', 'become', 'become_method', 'become_user', 'module_path'):
            arg_value = self.options.get(argument)
            argument = argument.replace('_', '-')
    
            if arg_value in (None, False):
                continue
    
            if arg_value is True:
                args.append('--{0}'.format(argument))
            else:
                args.append('--{0}={1}'.format(argument, arg_value))
    
        # Use Ansible's own adhoc cli to parse the fake command line we created and then save it
        # into Ansible's global context
        adhoc = AdHocCLI(args)
        adhoc.parse()
    
        # And now we'll never speak of this again
        del adhoc
    
        # Initialize callback to capture module JSON responses
        cb = ResultAccumulator()
    
        kwargs = dict(
            inventory=self.options['inventory_manager'],
            variable_manager=self.options['variable_manager'],
            loader=self.options['loader'],
            stdout_callback=cb,
            passwords=dict(conn_pass=None, become_pass=None),
        )
    
        # create a pseudo-play to execute the specified module via a single task
        play_ds = dict(
            name="pytest-ansible",
            hosts=self.options['host_pattern'],
            become=self.options.get('become'),
            become_user=self.options.get('become_user'),
            gather_facts='no',
            tasks=[
                dict(
                    action=dict(
                        module=self.options['module_name'], args=complex_args
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
__________ ERROR at setup of test_traffic_check_remote_bgp_fail_case ___________
                    ),
                ),
            ]
        )
        play = Play().load(play_ds, variable_manager=self.options['variable_manager'], loader=self.options['loader'])
    
        # now create a task queue manager to execute the play
        tqm = None
        try:
            tqm = TaskQueueManager(**kwargs)
            tqm.run(play)
        finally:
            if tqm:
                tqm.cleanup()
    
    
        # Raise exception if host(s) unreachable
        # FIXME - if multiple hosts were involved, should an exception be raised?
        if cb.unreachable:
>           raise AnsibleConnectionFailure("Host unreachable", dark=cb.unreachable, contacted=cb.contacted)
E           pytest_ansible.errors.AnsibleConnectionFailure: Host unreachable

arg_value  = ['/data/sonic-mgmt/ansible/library']
args       = ['pytest-ansible', 'vlab-c-01', '--connection=smart', '--become', '--become-method=sudo', '--become-user=root', ...]
argument   = 'module-path'
cb         = <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>
complex_args = {'_raw_params': 'show platform summary'}
hosts      = [vlab-c-01]
kwargs     = {'inventory': <ansible.inventory.manager.InventoryManager object at 0x7f745363d5e0>, 'loader': <ansible.parsing.datalo...ass': None}, 'stdout_callback': <pytest_ansible.module_dispatcher.v28.ResultAccumulator object at 0x7f7455a784f0>, ...}
module_args = ('show platform summary',)
no_hosts   = False
play       = pytest-ansible
play_ds    = {'become': True, 'become_user': 'root', 'gather_facts': 'no', 'hosts': 'vlab-c-01', ...}
self       = <pytest_ansible.module_dispatcher.v28.ModuleDispatcherV28 object at 0x7f745372aeb0>
tqm        = <ansible.executor.task_queue_manager.TaskQueueManager object at 0x7f74536b72e0>
verbosity  = None
verbosity_syntax = '-vvvvv'

/home/ubuntu/env-python3/lib/python3.8/site-packages/pytest_ansible/module_dispatcher/v28.py:159: AnsibleConnectionFailure

During handling of the above exception, another exception occurred:

enhance_inventory = None
ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
tbinfo = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}
request = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>

    @pytest.fixture(name="duthosts", scope="session")
    def fixture_duthosts(enhance_inventory, ansible_adhoc, tbinfo, request):
        """
        @summary: fixture to get DUT hosts defined in testbed.
        @param ansible_adhoc: Fixture provided by the pytest-ansible package.
            Source of the various device objects. It is
            mandatory argument for the class constructors.
        @param tbinfo: fixture provides information about testbed.
        """
        try:
            host = DutHosts(ansible_adhoc, tbinfo, get_specified_duts(request))
            return host
        except BaseException as e:
            logger.error("Failed to initialize duthosts.")
            request.config.cache.set("duthosts_fixture_failed", True)
>           pt_assert(False, "!!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!"
                      "Exception: {}".format(repr(e)))
E           Failed: !!!!!!!!!!!!!!!! duthosts fixture failed !!!!!!!!!!!!!!!!Exception: Host unreachable

ansible_adhoc = <function ansible_adhoc.<locals>.init_host_mgr at 0x7f74538b60d0>
enhance_inventory = None
request    = <SubRequest 'duthosts' for <Function test_interface_on_each_node>>
tbinfo     = {'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes', 'conf-name': 'vms-kvm-ciscovs-7nodes', 'duts': ['vlab-c-01'], ...}

conftest.py:368: Failed
=============================== warnings summary ===============================
common/plugins/loganalyzer/system_msg_handler.py:1
  /data/sonic-mgmt/tests/common/plugins/loganalyzer/system_msg_handler.py:1: DeprecationWarning: invalid escape sequence \ 
    '''

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
------------ generated xml file: /data/sonic-mgmt/tests/logs/tr.xml ------------
=========================== short test summary info ============================
SKIPPED [1] srv6/test_srv6_basic_sanity.py:500: This test is temporarily disabled due to configuration changes.
ERROR srv6/test_srv6_basic_sanity.py::test_interface_on_each_node - Failed: !...
ERROR srv6/test_srv6_basic_sanity.py::test_check_bgp_neighbors - Failed: !!!!...
ERROR srv6/test_srv6_basic_sanity.py::test_check_routes - Failed: !!!!!!!!!!!...
ERROR srv6/test_srv6_basic_sanity.py::test_traffic_check_via_trex - Failed: !...
ERROR srv6/test_srv6_basic_sanity.py::test_traffic_check_via_ptf - Failed: !!...
ERROR srv6/test_srv6_basic_sanity.py::test_traffic_check_local_link_fail_case
ERROR srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_igp_fail_case
ERROR srv6/test_srv6_basic_sanity.py::test_traffic_check_remote_bgp_fail_case
=================== 1 skipped, 1 warning, 8 errors in 32.98s ===================
2025-12-04 17:09:48.917635 : Get input param file /root/workspace/PhoenixWingDailySRv6Test/jenkins_648/input_param_json.txt
2025-12-04 17:09:48.917842 : Get file lock for {'user_info': 'PhoenixWing_Daily_SRv6_Test_648'}
2025-12-04 17:09:49.919104 : Found index 1 for action read, user_info PhoenixWing_Daily_SRv6_Test_648
2025-12-04 17:09:49.919161 : Release file lock for {'user_info': 'PhoenixWing_Daily_SRv6_Test_648', 'action': 'read', 'output_vm': {'index': 3, 'user_info': 'PhoenixWing_Daily_SRv6_Test_648'}, 'output_index': 1, 'output_prefix': '192.168.0'}
2025-12-04 17:09:49.919215 : read_vm_reservation : {"user_info": "PhoenixWing_Daily_SRv6_Test_648", "action": "read", "output_vm": {"index": 3, "user_info": "PhoenixWing_Daily_SRv6_Test_648"}, "output_index": 1, "output_prefix": "192.168.0"}
2025-12-04 17:09:49.919392 : ifconfig | grep 30.57.186.111
2025-12-04 17:09:49.922358 : ifconfig | grep 30.57.186.42
2025-12-04 17:09:49.924870 : ifconfig | grep 30.57.186.79
2025-12-04 17:09:49.927178 : ifconfig | grep 30.57.186.80
2025-12-04 17:09:49.929620 : ifconfig | grep 30.57.186.218
2025-12-04 17:09:49.931843 : ifconfig | grep 30.57.186.175
2025-12-04 17:09:49.933981 : ifconfig | grep 11.166.8.106
2025-12-04 17:09:49.936205 : ifconfig | grep 11.166.8.104
2025-12-04 17:09:49.938360 : ifconfig | grep 11.165.122.19
2025-12-04 17:09:49.940634 : ifconfig | grep 11.166.1.213
2025-12-04 17:09:49.942874 : ifconfig | grep 11.165.121.210
2025-12-04 17:09:49.945196 : ifconfig | grep 11.165.120.75
2025-12-04 17:09:49.947746 : ifconfig | grep 11.165.121.106
2025-12-04 17:09:49.950502 : ifconfig | grep 11.166.8.96
2025-12-04 17:09:49.952776 : DEBUG_ARR:         inet 11.166.8.96  netmask 255.255.240.0  broadcast 11.166.15.255
2025-12-04 17:09:49.952810 : Found local server setting for 11.166.8.96
2025-12-04 17:09:49.952821 : Set local ip as 192.168.0.3
{   'address': '11.166.8.96',
    'host_port': 'eth0',
    'jenkin_node_name': 'Pytest_ECS_96',
    'password': 'Alin00000s!',
    'user': 'root',
    'vm_bridge': 'vmbr0',
    'vm_gw': '192.168.0.1',
    'vmip': '192.168.0.2'}
2025-12-04 17:09:49.953026 : mkdir -p /tmp/local_cache//1764839388.9176273/
Run pytest on 11.166.8.96 vmip 192.168.0.3, vm name _192.168.0.3
Get input topo vms-kvm-ciscovs-7nodes
Get input test case  -c "srv6/test_srv6_basic_sanity.py" 
2025-12-04 17:09:49.955183 : ping 192.168.0.3 -c 2
2025-12-04 17:09:50.989248 : DEBUG_ARR: PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
2025-12-04 17:09:50.989293 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.343 ms
2025-12-04 17:09:50.989300 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.138 ms
2025-12-04 17:09:50.989304 : DEBUG_ARR: 
2025-12-04 17:09:50.989308 : DEBUG_ARR: --- 192.168.0.3 ping statistics ---
2025-12-04 17:09:50.989311 : DEBUG_ARR: 2 packets transmitted, 2 received, 0% packet loss, time 1029ms
2025-12-04 17:09:50.989315 : DEBUG_ARR: rtt min/avg/max/mdev = 0.138/0.240/0.343/0.102 ms
2025-12-04 17:09:50.989334 : ping 192.168.0.3 -c 2
2025-12-04 17:09:52.013258 : DEBUG_ARR: PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
2025-12-04 17:09:52.013303 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.111 ms
2025-12-04 17:09:52.013308 : DEBUG_ARR: 64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.162 ms
2025-12-04 17:09:52.013312 : DEBUG_ARR: 
2025-12-04 17:09:52.013316 : DEBUG_ARR: --- 192.168.0.3 ping statistics ---
2025-12-04 17:09:52.013320 : DEBUG_ARR: 2 packets transmitted, 2 received, 0% packet loss, time 1021ms
2025-12-04 17:09:52.013324 : DEBUG_ARR: rtt min/avg/max/mdev = 0.111/0.136/0.162/0.025 ms
2025-12-04 17:09:52.013357 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "docker exec --user ubuntu sonic-mgmt-test bash -c 'ls'"
2025-12-04 17:09:54.695182 : Run sudo monit unmonitor container_checker for range(0, 1)
2025-12-04 17:09:54.695224 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 sudo monit unmonitor container_checker'"
2025-12-04 17:09:58.709287 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "sudo setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump"
2025-12-04 17:10:00.468380 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "sudo chmod 777 /var/run/openvswitch/*"
2025-12-04 17:10:01.348198 : sshpass -p "123" scp   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" /root/workspace/PhoenixWingDailySRv6Test/jenkins_648/input_param_json.txt ubuntu@192.168.0.3:~/
2025-12-04 17:10:02.285809 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "docker exec --user ubuntu sonic-mgmt-test bash -c 'python3 -m venv ~/env-python3 ; source ~/env-python3/bin/activate;  pip install -i https://mirrors.aliyun.com/pypi/simple/  --upgrade \"paramiko>=3.5.1\";  cd /data/sonic-mgmt/tests; ./run_tests.sh -n vms-kvm-ciscovs-7nodes -d vlab-c-01  -c "srv6/test_srv6_basic_sanity.py"  -f vtestbed.yaml -i ../ansible/veos_vtb  -u  -e --skip_sanity -e --disable_loganalyzer -e --neighbor_type=sonic '"
2025-12-04 17:10:47.583992 : Run sudo ls -l  /etc/sonic/frr/* for range(0, 1)
2025-12-04 17:10:47.584034 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 sudo ls -l  /etc/sonic/frr/*'"
2025-12-04 17:10:53.140646 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:10:53.206573 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:10:53.206654 : Run sudo ls -l  /etc/sonic/frr/* for range(0, 6)
2025-12-04 17:10:53.206666 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 sudo ls -l  /etc/sonic/frr/*'"
2025-12-04 17:10:55.676883 : Run uptime for range(0, 1)
2025-12-04 17:10:55.676922 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 uptime'"
2025-12-04 17:10:59.635868 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
 09:10:59 up 33 min,  0 user,  load average: 18.47, 21.42, 21.18
2025-12-04 17:10:59.699318 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:10:59.699430 : Run uptime for range(0, 6)
2025-12-04 17:10:59.699444 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 uptime'"
2025-12-04 17:11:00.881549 : Run docker ps for range(0, 1)
2025-12-04 17:11:00.881588 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 docker ps'"
2025-12-04 17:11:04.917943 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS          PORTS     NAMES
d52b9f11e701   docker-snmp:latest                   "/usr/bin/docker-snm…"   27 minutes ago   Up 12 minutes             snmp
3155359ea5c6   docker-platform-monitor:latest       "/usr/bin/docker_ini…"   27 minutes ago   Up 12 minutes             pmon
adee707f0ce2   docker-sonic-mgmt-framework:latest   "/usr/local/bin/supe…"   27 minutes ago   Up 12 minutes             mgmt-framework
c36e5f144ba8   docker-lldp:latest                   "/usr/bin/docker-lld…"   28 minutes ago   Up 12 minutes             lldp
4ef0380b6699   docker-sonic-gnmi:latest             "/usr/local/bin/supe…"   28 minutes ago   Up 12 minutes             gnmi
40865859d3f1   docker-router-advertiser:latest      "/usr/bin/docker-ini…"   31 minutes ago   Up 15 minutes             radv
4c40382f86f7   docker-teamd:latest                  "/usr/local/bin/supe…"   32 minutes ago   Up 15 minutes             teamd
f6d62ba924eb   docker-syncd-ciscovs:latest          "/usr/bin/docker_ini…"   32 minutes ago   Up 15 minutes             syncd
298cf06e56c2   docker-fpm-frr:latest                "/usr/bin/docker_ini…"   32 minutes ago   Up 15 minutes             bgp
53e34b289e8c   docker-sysmgr:latest                 "/usr/local/bin/supe…"   32 minutes ago   Up 15 minutes             sysmgr
ea70690cd984   docker-eventd:latest                 "/usr/local/bin/supe…"   32 minutes ago   Up 15 minutes             eventd
193392143d47   docker-orchagent:latest              "/usr/bin/docker-ini…"   32 minutes ago   Up 15 minutes             swss
ad94e974a1fa   docker-database:latest               "/usr/local/bin/dock…"   32 minutes ago   Up 32 minutes             database
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:11:04.981690 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:04.981774 : Run docker ps for range(0, 6)
2025-12-04 17:11:04.981787 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 docker ps'"
2025-12-04 17:11:06.913107 : Run show version for range(0, 1)
2025-12-04 17:11:06.913145 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show version'"
2025-12-04 17:11:10.997245 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}

SONiC Software Version: SONiC.phoenixwing_08192025.384-dirty-20251202.084825
SONiC OS Version: 12
Distribution: Debian 12.12
Kernel: 6.1.0-29-2-amd64
Build commit: 885eae54f
Build date: Tue Dec  2 09:58:10 UTC 2025
Built by: joy@joy

Platform: x86_64-kvm_x86_64-r0
HwSKU: cisco-8101-p4-32x100-vs
ASIC: cisco-ngdp-vs
ASIC Count: 1
Serial Number: N/A
Model Number: N/A
Hardware Revision: N/A
Uptime: 09:11:11 up 33 min,  0 user,  load average: 21.36, 21.93, 21.35
Date: Thu 04 Dec 2025 09:11:11

Docker images:
REPOSITORY                    TAG                                              IMAGE ID       SIZE
docker-macsec                 latest                                           8922bfb3fb75   319MB
docker-macsec                 phoenixwing_08192025.384-dirty-20251202.084825   8922bfb3fb75   319MB
docker-dhcp-relay             latest                                           aaba8d83c448   295MB
docker-dhcp-relay             phoenixwing_08192025.384-dirty-20251202.084825   aaba8d83c448   295MB
docker-teamd                  latest                                           f49c66de6ccd   316MB
docker-teamd                  phoenixwing_08192025.384-dirty-20251202.084825   f49c66de6ccd   316MB
docker-sysmgr                 latest                                           53cd1475b2a7   298MB
docker-sysmgr                 phoenixwing_08192025.384-dirty-20251202.084825   53cd1475b2a7   298MB
docker-sonic-mgmt-framework   latest                                           b03fc415056f   380MB
docker-sonic-mgmt-framework   phoenixwing_08192025.384-dirty-20251202.084825   b03fc415056f   380MB
docker-snmp                   latest                                           e1fc1d78905a   311MB
docker-snmp                   phoenixwing_08192025.384-dirty-20251202.084825   e1fc1d78905a   311MB
docker-sflow                  latest                                           91b4ebdeb025   317MB
docker-sflow                  phoenixwing_08192025.384-dirty-20251202.084825   91b4ebdeb025   317MB
docker-router-advertiser      latest                                           55150179cbf5   286MB
docker-router-advertiser      phoenixwing_08192025.384-dirty-20251202.084825   55150179cbf5   286MB
docker-platform-monitor       latest                                           7a3be2d81f94   420MB
docker-platform-monitor       phoenixwing_08192025.384-dirty-20251202.084825   7a3be2d81f94   420MB
docker-orchagent              latest                                           c5d3081188ab   328MB
docker-orchagent              phoenixwing_08192025.384-dirty-20251202.084825   c5d3081188ab   328MB
docker-nat                    latest                                           6ceca7bbccd4   319MB
docker-nat                    phoenixwing_08192025.384-dirty-20251202.084825   6ceca7bbccd4   319MB
docker-mux                    latest                                           cb5b1097b140   338MB
docker-mux                    phoenixwing_08192025.384-dirty-20251202.084825   cb5b1097b140   338MB
docker-lldp                   latest                                           521912c35d16   332MB
docker-lldp                   phoenixwing_08192025.384-dirty-20251202.084825   521912c35d16   332MB
docker-sonic-gnmi             latest                                           77e6bc5ba7aa   402MB
docker-sonic-gnmi             phoenixwing_08192025.384-dirty-20251202.084825   77e6bc5ba7aa   402MB
docker-gnmi-watchdog          latest                                           3299c6e0cbfb   294MB
docker-gnmi-watchdog          phoenixwing_08192025.384-dirty-20251202.084825   3299c6e0cbfb   294MB
docker-fpm-frr                latest                                           4475594c8d9f   365MB
docker-fpm-frr                phoenixwing_08192025.384-dirty-20251202.084825   4475594c8d9f   365MB
docker-eventd                 latest                                           84d8e51b7698   286MB
docker-eventd                 phoenixwing_08192025.384-dirty-20251202.084825   84d8e51b7698   286MB
docker-database               latest                                           19ac949fd780   299MB
docker-database               phoenixwing_08192025.384-dirty-20251202.084825   19ac949fd780   299MB
docker-sonic-bmp              latest                                           73c8a3d341e0   288MB
docker-sonic-bmp              phoenixwing_08192025.384-dirty-20251202.084825   73c8a3d341e0   288MB
docker-bmp-watchdog           latest                                           5f2a374311a2   286MB
docker-bmp-watchdog           phoenixwing_08192025.384-dirty-20251202.084825   5f2a374311a2   286MB
docker-auditd                 latest                                           c0c4dc50ec7e   286MB
docker-auditd                 phoenixwing_08192025.384-dirty-20251202.084825   c0c4dc50ec7e   286MB
docker-auditd-watchdog        latest                                           a2f36a025290   289MB
docker-auditd-watchdog        phoenixwing_08192025.384-dirty-20251202.084825   a2f36a025290   289MB
docker-syncd-ciscovs          latest                                           8df786aecda9   1.26GB
docker-syncd-ciscovs          phoenixwing_08192025.384-dirty-20251202.084825   8df786aecda9   1.26GB

{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:11:11.061987 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:11.062071 : Run show version for range(0, 6)
2025-12-04 17:11:11.062086 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show version'"
2025-12-04 17:11:12.954669 : Run show interface status for range(0, 1)
2025-12-04 17:11:12.954707 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show interface status'"
2025-12-04 17:11:16.915887 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
  Interface                Lanes    Speed    MTU    FEC        Alias    Vlan    Oper    Admin    Type    Asym PFC
-----------  -------------------  -------  -----  -----  -----------  ------  ------  -------  ------  ----------
  Ethernet0  2304,2305,2306,2307     100G   9100    N/A    Ethernet0  routed      up       up     N/A         N/A
  Ethernet4  2320,2321,2322,2323     100G   9100    N/A    Ethernet4  routed      up       up     N/A         N/A
  Ethernet8  2312,2313,2314,2315     100G   9100    N/A    Ethernet8  routed      up       up     N/A         N/A
 Ethernet12  2056,2057,2058,2059     100G   9100    N/A   Ethernet12  routed      up       up     N/A         N/A
 Ethernet16  1792,1793,1794,1795     100G   9100    N/A   Ethernet16  routed      up       up     N/A         N/A
 Ethernet20  2048,2049,2050,2051     100G   9100    N/A   Ethernet20  routed      up       up     N/A         N/A
 Ethernet24  2560,2561,2562,2563     100G   9100    N/A   Ethernet24  routed      up       up     N/A         N/A
 Ethernet28  2824,2825,2826,2827     100G   9100    N/A   Ethernet28  routed      up       up     N/A         N/A
 Ethernet32  2832,2833,2834,2835     100G   9100    N/A   Ethernet32  routed      up       up     N/A         N/A
 Ethernet36  2816,2817,2818,2819     100G   9100    N/A   Ethernet36  routed      up       up     N/A         N/A
 Ethernet40  2568,2569,2570,2571     100G   9100    N/A   Ethernet40  routed      up       up     N/A         N/A
 Ethernet44  2576,2577,2578,2579     100G   9100    N/A   Ethernet44  routed      up       up     N/A         N/A
 Ethernet48  1536,1537,1538,1539     100G   9100    N/A   Ethernet48  routed      up       up     N/A         N/A
 Ethernet52  1800,1801,1802,1803     100G   9100    N/A   Ethernet52  routed      up       up     N/A         N/A
 Ethernet56  1552,1553,1554,1555     100G   9100    N/A   Ethernet56  routed      up       up     N/A         N/A
 Ethernet60  1544,1545,1546,1547     100G   9100    N/A   Ethernet60  routed      up       up     N/A         N/A
 Ethernet64  1296,1297,1298,1299     100G   9100    N/A   Ethernet64  routed      up       up     N/A         N/A
 Ethernet68  1288,1289,1290,1291     100G   9100    N/A   Ethernet68  routed      up       up     N/A         N/A
 Ethernet72  1280,1281,1282,1283     100G   9100    N/A   Ethernet72  routed      up       up     N/A         N/A
 Ethernet76  1032,1033,1034,1035     100G   9100    N/A   Ethernet76  routed      up       up     N/A         N/A
 Ethernet80      264,265,266,267     100G   9100    N/A   Ethernet80  routed      up       up     N/A         N/A
 Ethernet84      272,273,274,275     100G   9100    N/A   Ethernet84  routed      up       up     N/A         N/A
 Ethernet88          16,17,18,19     100G   9100    N/A   Ethernet88  routed      up       up     N/A         N/A
 Ethernet92              0,1,2,3     100G   9100    N/A   Ethernet92  routed      up       up     N/A         N/A
 Ethernet96      256,257,258,259     100G   9100    N/A   Ethernet96  routed      up       up     N/A         N/A
Ethernet100            8,9,10,11     100G   9100    N/A  Ethernet100  routed      up       up     N/A         N/A
Ethernet104  1024,1025,1026,1027     100G   9100    N/A  Ethernet104  routed      up       up     N/A         N/A
Ethernet108      768,769,770,771     100G   9100    N/A  Ethernet108  routed      up       up     N/A         N/A
Ethernet112      524,525,526,527     100G   9100    N/A  Ethernet112  routed      up       up     N/A         N/A
Ethernet116      776,777,778,779     100G   9100    N/A  Ethernet116  routed      up       up     N/A         N/A
Ethernet120      516,517,518,519     100G   9100    N/A  Ethernet120  routed      up       up     N/A         N/A
Ethernet124      528,529,530,531     100G   9100    N/A  Ethernet124  routed      up       up     N/A         N/A
2025-12-04 17:11:16.980400 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:16.980492 : Run show interface status for range(0, 6)
2025-12-04 17:11:16.980505 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show interface status'"
2025-12-04 17:11:20.363334 : Run show ip interface for range(0, 1)
2025-12-04 17:11:20.363373 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show ip interface'"
2025-12-04 17:11:26.067869 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
Interface    Master    IPv4 address/mask    Admin/Oper    BGP Neighbor    Neighbor IP
-----------  --------  -------------------  ------------  --------------  -------------------------
Ethernet24   Vrf1      10.10.246.29/24      up/up         exabgp_v4       ('Vrf1', '10.10.246.254')
Loopback0              100.1.0.29/32        up/up         N/A             N/A
docker0                240.127.1.1/24       up/down       N/A             N/A
eth0                   10.250.0.51/24       up/up         N/A             N/A
lo                     127.0.0.1/16         up/up         N/A             N/A
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:11:26.133379 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:26.133462 : Run show ip interface for range(0, 6)
2025-12-04 17:11:26.133475 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show ip interface'"
2025-12-04 17:11:28.935578 : Run show interface portchannel for range(0, 1)
2025-12-04 17:11:28.935618 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show interface portchannel'"
2025-12-04 17:11:32.948762 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
Flags: A - active, I - inactive, Up - up, Dw - Down, N/A - not available,
       S - selected, D - deselected, * - not synced
No.    Team Dev    Protocol    Ports
-----  ----------  ----------  -------
2025-12-04 17:11:33.013931 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:33.014068 : Run show interface portchannel for range(0, 6)
2025-12-04 17:11:33.014082 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show interface portchannel'"
2025-12-04 17:11:36.469883 : Run show ip route for range(0, 1)
2025-12-04 17:11:36.469922 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 show ip route'"
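The traced commands above all follow one double-hop pattern: `sshpass` to the jump host, `docker exec` into the `sonic-mgmt-test` container, then a second `sshpass`/`ssh` hop to the DUT running the `show` command. A minimal sketch of how such a command string could be assembled; the hosts, passwords, and container name are taken from the log lines, while the helper function itself is hypothetical:

```python
# Common SSH options used on both hops in the log (skip host-key checks).
SSH_OPTS = '-q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'

def build_nested_cmd(jump_host, jump_pass, container,
                     dut_ip, dut_pass, show_cmd, timeout=20):
    """Build the double-hop command seen in the log:
    jump host -> docker exec into the sonic-mgmt container -> DUT."""
    # Innermost hop: from inside the container to the DUT.
    inner = (f'sshpass -p {dut_pass} ssh {SSH_OPTS} '
             f'admin@{dut_ip} {show_cmd}')
    # Middle layer: run the inner hop inside the container, bounded by timeout.
    middle = f"timeout {timeout} docker exec {container} bash -c '{inner}'"
    # Outer hop: reach the jump host that runs docker.
    outer = (f'sshpass -p "{jump_pass}" ssh {SSH_OPTS} '
             f'ubuntu@{jump_host} "{middle}"')
    return outer

cmd = build_nested_cmd("192.168.0.3", "123", "sonic-mgmt-test",
                       "10.250.0.51", "password", "show ip interface")
print(cmd)
```

Note the quoting layers: the outer command wraps the `docker exec` invocation in double quotes, while `bash -c` takes the innermost `ssh` in single quotes, matching the nesting visible in the traced lines.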
2025-12-04 17:11:40.501432 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued route, r - rejected route

C>*10.250.0.0/24 is directly connected, eth0, 00:15:39
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:11:40.568883 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:40.568969 : Run show ip route for range(0, 6)
2025-12-04 17:11:40.568983 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 show ip route'"
2025-12-04 17:11:44.260923 : Run ip link for range(0, 1)
2025-12-04 17:11:44.260963 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 ip link'"
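The two commands logged above reach the DUT through a two-hop path: sshpass/ssh to the lab host (192.168.0.3), `docker exec` into the `sonic-mgmt-test` container, then a second sshpass/ssh to the device's management IP. A minimal sketch of how such a nested command string could be assembled; `build_remote_cmd` is a hypothetical helper for illustration, not part of sonic-mgmt, and the passwords are the placeholder values already visible in the log:

```python
# Sketch: assemble the two-hop ssh command pattern seen in the log above.
# build_remote_cmd is a hypothetical helper, not a sonic-mgmt API.
def build_remote_cmd(jump_user, jump_host, container, dut_user, dut_ip, cmd,
                     timeout=20):
    # Options used in the log: quiet mode, no host-key persistence/checking.
    ssh_opts = ('-q -o "UserKnownHostsFile=/dev/null" '
                '-o "StrictHostKeyChecking=no"')
    # Inner hop: from inside the container to the DUT.
    inner = f"sshpass -p password ssh {ssh_opts} {dut_user}@{dut_ip} {cmd}"
    # Outer hop: to the lab host, then docker exec with a timeout guard.
    return (f'sshpass -p "123" ssh {ssh_opts} {jump_user}@{jump_host} '
            f'"timeout {timeout} docker exec {container} '
            f"bash -c '{inner}'\"")
```

Nesting the inner command in single quotes inside the outer double-quoted string mirrors the quoting seen in the logged commands.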
2025-12-04 17:11:48.788323 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:a9:50:87 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:1a:50:d3 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:fd:9a:49 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:22:c0:a4 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:24:dd:9e brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:b2:d7:36 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:5c:0f:6e brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:37:db:31 brd ff:ff:ff:ff:ff:ff
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:7f:ac:e8:cf brd ff:ff:ff:ff:ff:ff
11: swveth1@veth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:0d:7d:f7:4c:39 brd ff:ff:ff:ff:ff:ff
12: veth1@swveth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:f2:cf:82:59:f2 brd ff:ff:ff:ff:ff:ff
13: swveth2@veth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:9a:00:c7:c9:fd brd ff:ff:ff:ff:ff:ff
14: veth2@swveth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:03:db:54:83:09 brd ff:ff:ff:ff:ff:ff
15: swveth3@veth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:3c:c7:6e:ac:51 brd ff:ff:ff:ff:ff:ff
16: veth3@swveth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether de:f2:05:60:75:b0 brd ff:ff:ff:ff:ff:ff
17: swveth4@veth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether aa:13:f8:46:2a:35 brd ff:ff:ff:ff:ff:ff
18: veth4@swveth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:1d:e2:19:7e:0d brd ff:ff:ff:ff:ff:ff
19: swveth5@veth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:76:1e:91:74:f7 brd ff:ff:ff:ff:ff:ff
20: veth5@swveth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1a:54:74:6f:1f:71 brd ff:ff:ff:ff:ff:ff
21: swveth6@veth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 82:1d:06:cf:b3:cb brd ff:ff:ff:ff:ff:ff
22: veth6@swveth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1e:85:f2:43:db:57 brd ff:ff:ff:ff:ff:ff
23: swveth7@veth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e2:c8:d0:30:0d:e6 brd ff:ff:ff:ff:ff:ff
24: veth7@swveth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b2:65:15:ba:d7:03 brd ff:ff:ff:ff:ff:ff
25: swveth8@veth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:ef:b5:04:18:58 brd ff:ff:ff:ff:ff:ff
26: veth8@swveth8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5e:25:99:94:83:6e brd ff:ff:ff:ff:ff:ff
27: swveth9@veth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ea:21:15:e5:ca:7f brd ff:ff:ff:ff:ff:ff
28: veth9@swveth9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether c6:05:23:58:85:26 brd ff:ff:ff:ff:ff:ff
29: swveth10@veth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6e:d7:10:90:30:be brd ff:ff:ff:ff:ff:ff
30: veth10@swveth10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 62:03:55:36:78:50 brd ff:ff:ff:ff:ff:ff
31: swveth11@veth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:26:7f:c6:af:2a brd ff:ff:ff:ff:ff:ff
32: veth11@swveth11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9a:bc:a1:c1:d8:f3 brd ff:ff:ff:ff:ff:ff
33: swveth12@veth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ae:c3:bc:5d:00:35 brd ff:ff:ff:ff:ff:ff
34: veth12@swveth12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5e:64:32:4b:ac:28 brd ff:ff:ff:ff:ff:ff
35: swveth13@veth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:7a:22:8a:3c:ba brd ff:ff:ff:ff:ff:ff
36: veth13@swveth13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:bb:3a:00:7a:db brd ff:ff:ff:ff:ff:ff
37: swveth14@veth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:32:45:c5:1d:44 brd ff:ff:ff:ff:ff:ff
38: veth14@swveth14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ce:cf:62:f7:01:ea brd ff:ff:ff:ff:ff:ff
39: swveth15@veth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:97:d9:47:0d:1f brd ff:ff:ff:ff:ff:ff
40: veth15@swveth15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7a:0c:62:1b:22:ce brd ff:ff:ff:ff:ff:ff
41: swveth16@veth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:30:cb:05:70:da brd ff:ff:ff:ff:ff:ff
42: veth16@swveth16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 9e:1e:db:ef:0e:16 brd ff:ff:ff:ff:ff:ff
43: swveth17@veth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:6e:e7:8e:b7:0d brd ff:ff:ff:ff:ff:ff
44: veth17@swveth17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether a6:a2:49:27:98:8c brd ff:ff:ff:ff:ff:ff
45: swveth18@veth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2a:ab:aa:98:5f:f0 brd ff:ff:ff:ff:ff:ff
46: veth18@swveth18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:0a:34:94:9d:c0 brd ff:ff:ff:ff:ff:ff
47: swveth19@veth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5e:81:a4:12:1d:04 brd ff:ff:ff:ff:ff:ff
48: veth19@swveth19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 62:d3:23:61:c2:d7 brd ff:ff:ff:ff:ff:ff
49: swveth20@veth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether be:f7:1b:f6:e3:59 brd ff:ff:ff:ff:ff:ff
50: veth20@swveth20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:0d:67:ed:af:86 brd ff:ff:ff:ff:ff:ff
51: swveth21@veth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ae:bd:37:60:a9:eb brd ff:ff:ff:ff:ff:ff
52: veth21@swveth21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 42:dc:de:29:d5:ce brd ff:ff:ff:ff:ff:ff
53: swveth22@veth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 2e:8f:b8:e7:d3:9d brd ff:ff:ff:ff:ff:ff
54: veth22@swveth22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 46:ea:1e:ef:9f:88 brd ff:ff:ff:ff:ff:ff
55: swveth23@veth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:54:0c:80:39:91 brd ff:ff:ff:ff:ff:ff
56: veth23@swveth23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6e:28:ad:02:69:78 brd ff:ff:ff:ff:ff:ff
57: swveth24@veth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 5e:9c:12:65:06:e3 brd ff:ff:ff:ff:ff:ff
58: veth24@swveth24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ae:7c:4e:26:8a:7d brd ff:ff:ff:ff:ff:ff
59: swveth25@veth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 16:ec:a9:0f:58:ea brd ff:ff:ff:ff:ff:ff
60: veth25@swveth25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:27:8b:9d:88:56 brd ff:ff:ff:ff:ff:ff
61: swveth26@veth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7e:6d:d1:09:3e:4c brd ff:ff:ff:ff:ff:ff
62: veth26@swveth26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:81:ab:5c:01:5a brd ff:ff:ff:ff:ff:ff
63: swveth27@veth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 3a:79:98:fb:ba:a9 brd ff:ff:ff:ff:ff:ff
64: veth27@swveth27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 92:a9:ed:10:2a:a1 brd ff:ff:ff:ff:ff:ff
65: swveth28@veth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d6:50:58:fe:00:a8 brd ff:ff:ff:ff:ff:ff
66: veth28@swveth28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:40:39:13:51:00 brd ff:ff:ff:ff:ff:ff
67: swveth29@veth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8a:c4:80:20:b1:6e brd ff:ff:ff:ff:ff:ff
68: veth29@swveth29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:51:6d:82:a7:32 brd ff:ff:ff:ff:ff:ff
69: swveth30@veth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:08:b9:6d:d4:d5 brd ff:ff:ff:ff:ff:ff
70: veth30@swveth30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 76:5f:e7:25:09:cf brd ff:ff:ff:ff:ff:ff
71: swveth31@veth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ca:cd:12:44:9a:8b brd ff:ff:ff:ff:ff:ff
72: veth31@swveth31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:83:60:fb:b0:ee brd ff:ff:ff:ff:ff:ff
73: swveth32@veth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether da:31:b9:0b:a9:4f brd ff:ff:ff:ff:ff:ff
74: veth32@swveth32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:06:db:86:25:d5 brd ff:ff:ff:ff:ff:ff
110: pimreg@NONE: <NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/pimreg 
111: Bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
112: Loopback0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ae:20:e5:77:e5:d4 brd ff:ff:ff:ff:ff:ff
113: Vrf1: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 72:39:c4:20:70:c5 brd ff:ff:ff:ff:ff:ff
114: dummy: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue master Bridge state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 0a:3c:ac:3a:40:e0 brd ff:ff:ff:ff:ff:ff
115: Vrf2: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 26:24:d8:43:f8:98 brd ff:ff:ff:ff:ff:ff
116: Ethernet92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
117: Ethernet100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
118: Ethernet88: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
119: Ethernet96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
120: Ethernet80: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
121: Ethernet84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
122: Ethernet120: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
123: Ethernet112: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
124: Ethernet124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
125: Ethernet108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
126: Ethernet116: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
127: Ethernet104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
128: Ethernet76: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
129: Ethernet72: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
130: Ethernet68: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
131: Ethernet64: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
132: Ethernet48: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
133: Ethernet60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
134: Ethernet56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
135: Ethernet16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
136: Ethernet52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
137: Ethernet20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
138: Ethernet12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
139: Ethernet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
140: Ethernet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
141: Ethernet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
142: Ethernet24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel master Vrf1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
143: Ethernet40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
144: Ethernet44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
145: Ethernet36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
146: Ethernet28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
147: Ethernet32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:df:0c:4e brd ff:ff:ff:ff:ff:ff
148: pimreg1001@NONE: <NOARP,ALLMULTI,UP,LOWER_UP> mtu 1472 qdisc noqueue master Vrf1 state UNKNOWN mode DEFAULT group default qlen 1000
    link/pimreg 
2025-12-04 17:11:48.852842 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:48.852924 : Run ip link for range(0, 6)
2025-12-04 17:11:48.852937 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 ip link'"
2025-12-04 17:11:50.792604 : Run ip route for range(0, 1)
2025-12-04 17:11:50.792643 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.125 ip route'"
2025-12-04 17:11:54.774835 : Current directory /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env
{'conf-name': 'vms-kvm-t0', 'group-name': 'vms6-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64', 'group-name': 'vms6-1', 'topo': 't0-64', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-64-32', 'group-name': 'vms6-1', 'topo': 't0-64-32', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t1-lag', 'group-name': 'vms6-2', 'topo': 't1-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-02', 'ptf_ip': '10.250.0.106/24', 'ptf_ipv6': 'fec0::ffff:afa:6/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-2', 'group-name': 'vms6-3', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-03', 'ptf_ip': '10.250.0.108/24', 'ptf_ipv6': 'fec0::ffff:afa:8/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-04'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dual-t0', 'group-name': 'vms6-4', 'topo': 'dualtor', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0108', 'dut': ['vlab-05', 'vlab-06'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR testbed'}
{'conf-name': 'vms-kvm-multi-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-64-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0104', 'dut': ['vlab-07'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-four-asic-t1-lag', 'group-name': 'vms6-4', 'topo': 't1-8-lag', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-05', 'ptf_ip': '10.250.0.110/24', 'ptf_ipv6': 'fec0::ffff:afa:a/64', 'server': 'server_1', 'vm_base': 'VM0128', 'dut': ['vlab-08'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests multi-asic virtual switch vm'}
{'conf-name': 'vms-kvm-t2', 'group-name': 'vms6-4', 'topo': 't2-vs', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-04', 'ptf_ip': '10.250.0.109/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-t2-01', 'vlab-t2-02', 'vlab-t2-sup'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'T2 Virtual chassis'}
{'conf-name': 'vms-kvm-t0-3', 'group-name': 'vms6-6', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-06', 'ptf_ip': '10.250.0.116/24', 'ptf_ipv6': 'fec0::ffff:afb:2/64', 'server': 'server_1', 'vm_base': 'VM0132', 'dut': ['vlab-09'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-t0-4', 'group-name': 'vms6-7', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-07', 'ptf_ip': '10.250.0.118/24', 'ptf_ipv6': 'fec0::ffff:afb:4/64', 'server': 'server_1', 'vm_base': 'VM0136', 'dut': ['vlab-10'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
10.250.0.0/24 dev eth0 proto kernel scope link src 10.250.0.51 
240.127.1.0/24 dev docker0 proto kernel scope link src 240.127.1.1 linkdown 
{'conf-name': 'vms-kvm-dual-mixed', 'group-name': 'vms6-8', 'topo': 'dualtor-mixed', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-08', 'ptf_ip': '10.250.0.119/24', 'ptf_ipv6': 'fec0::ffff:afa:9/64', 'netns_mgmt_ip': '10.250.0.126/24', 'server': 'server_1', 'vm_base': 'VM0140', 'dut': ['vlab-11', 'vlab-12'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Dual-TOR-Mixed testbed'}
{'conf-name': '8000e-t0', 'group-name': 'vms8-1', 'topo': 't0', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': '8000e-t1', 'group-name': 'vms8-1', 'topo': 't1', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-8k-01', 'ptf_ip': '10.250.0.202/24', 'ptf_ipv6': 'fec0::ffff:afc:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-8k-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests 8000e sonic device'}
{'conf-name': 'vms-kvm-wan-pub', 'group-name': 'vms6-1', 'topo': 'wan-pub', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-4link', 'group-name': 'vms6-1', 'topo': 'wan-4link', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-cisco', 'group-name': 'vms6-1', 'topo': 'wan-pub-cisco', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-2dut', 'group-name': 'vms6-1', 'topo': 'wan-2dut', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-03'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-3link-tg', 'group-name': 'vms6-1', 'topo': 'wan-3link-tg', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-ecmp', 'group-name': 'vms6-1', 'topo': 'wan-ecmp', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01', 'vlab-02'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-wan-pub-isis', 'group-name': 'vms6-1', 'topo': 'wan-pub-isis', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual switch vm'}
{'conf-name': 'vms-kvm-dpu', 'group-name': 'vms6-1', 'topo': 'dpu', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-01'], 'inv_name': 'veos_vtb', 'auto_recover': False, 'comment': 'Tests virtual switch vm as DPU'}
{'conf-name': 'vms-kvm-ciscovs-7nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-7nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 7 nodes'}
{'conf-name': 'vms-kvm-ciscovs-5nodes', 'group-name': 'vms9-1', 'topo': 'ciscovs-5nodes', 'ptf_image_name': 'docker-ptf', 'ptf': 'ptf-01', 'ptf_ip': '10.250.0.102/24', 'ptf_ipv6': 'fec0::ffff:afa:2/64', 'server': 'server_1', 'vm_base': 'VM0100', 'dut': ['vlab-c-01'], 'inv_name': 'veos_vtb', 'auto_recover': 'False', 'comment': 'Tests virtual cisco vs vm with 5 nodes'}
2025-12-04 17:11:54.840118 : /root/workspace/PhoenixWingDailySRv6Test/na_lab/pytest_env/../..//sonic-mgmt/ansible/vars/topo_ciscovs-7nodes.yml : vms_yml {'PE1': {'vlans': [28], 'vm_offset': 0}, 'PE2': {'vlans': [29], 'vm_offset': 1}, 'PE3': {'vlans': [30], 'vm_offset': 2}, 'P3': {'vlans': [31], 'vm_offset': 3}, 'P2': {'vlans': [16], 'vm_offset': 4}, 'P4': {'vlans': [17], 'vm_offset': 5}}, len 6
2025-12-04 17:11:54.840216 : Run ip route for range(0, 6)
2025-12-04 17:11:54.840230 : sshpass -p "123" ssh   -q -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" ubuntu@192.168.0.3 "timeout 20 docker exec  sonic-mgmt-test bash -c 'sshpass -p password ssh -q  -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" admin@10.250.0.51 ip route'"
2025-12-04 17:11:56.385627 : rm -rf /tmp/local_cache//1764839388.9176273/
--- 127.47047257423401 seconds ---
