ansible.executor package

Submodules

ansible.executor.module_common module

class ansible.executor.module_common.ModuleDepFinder(*args, **kwargs)[source]

Bases: ast.NodeVisitor

IMPORT_PREFIX_SIZE = 21
visit_Import(node)[source]
visit_ImportFrom(node)[source]
ansible.executor.module_common.recursive_finder(name, data, py_module_names, py_module_cache, zf)[source]

Using ModuleDepFinder, make sure we have all of the module_utils files that the module and its module_utils files need.
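As a minimal sketch of the technique, ModuleDepFinder walks a module's syntax tree with ast.NodeVisitor and records which module_utils submodules are imported (note that IMPORT_PREFIX_SIZE = 21 matches the length of the string 'ansible.module_utils.'). The class and variable names below are illustrative, not Ansible's actual implementation:

```python
import ast

class DepFinder(ast.NodeVisitor):
    """Collect names imported from ansible.module_utils (illustrative sketch)."""
    PREFIX = 'ansible.module_utils.'  # len == 21, cf. IMPORT_PREFIX_SIZE

    def __init__(self):
        self.submodules = set()

    def visit_Import(self, node):
        # handles: import ansible.module_utils.basic
        for alias in node.names:
            if alias.name.startswith(self.PREFIX):
                self.submodules.add(alias.name[len(self.PREFIX):])
        self.generic_visit(node)

    def visit_ImportFrom(self, node):
        # handles: from ansible.module_utils.basic import AnsibleModule
        if node.module and node.module.startswith(self.PREFIX):
            self.submodules.add(node.module[len(self.PREFIX):])
        elif node.module == 'ansible.module_utils':
            # handles: from ansible.module_utils import basic
            for alias in node.names:
                self.submodules.add(alias.name)
        self.generic_visit(node)

source = "from ansible.module_utils.basic import AnsibleModule\n"
finder = DepFinder()
finder.visit(ast.parse(source))
print(sorted(finder.submodules))  # ['basic']
```

recursive_finder() then repeats this scan on each discovered module_utils file, so transitive dependencies end up in the cache as well.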

ansible.executor.module_common.modify_module(module_name, module_path, module_args, task_vars={}, module_compression='ZIP_STORED')[source]

Used to insert chunks of code into modules before transfer rather than doing regular python imports.

This allows for more efficient transfer in a non-bootstrapping scenario by not moving extra files over the wire and also takes care of embedding arguments in the transferred modules.

This version is done in such a way that local imports can still be used in the module code, so IDEs don’t have to be aware of what is going on.

Example:

from ansible.module_utils.basic import *

... will result in the insertion of basic.py into the module
from the module_utils/ directory in the source tree.

All modules are required to import at least basic, though there will also be other snippets.

For PowerShell, there are equivalent conventions, like this:

# POWERSHELL_COMMON

Which results in the inclusion of the common code from powershell.ps1
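The embedding step can be pictured as a textual substitution: the recognizable import (or comment) line is replaced with the snippet's source before the module is transferred. The sketch below shows only that core idea under illustrative names; the real modify_module() also embeds the task arguments and handles compression:

```python
# Illustrative sketch: swap the wildcard import for the snippet's code.
REPLACER = 'from ansible.module_utils.basic import *'

def embed_snippet(module_source, snippet_source):
    """Replace the wildcard import line with the snippet's source code."""
    return module_source.replace(REPLACER, snippet_source)

module_src = (
    "#!/usr/bin/python\n"
    "from ansible.module_utils.basic import *\n"
    "main()\n"
)
basic_src = "def main():\n    print('hello from embedded basic.py')\n"

combined = embed_snippet(module_src, basic_src)
print(combined)
```

After substitution the module is self-contained, which is what allows it to run on the remote host without shipping module_utils/ alongside it.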

ansible.executor.play_iterator module

class ansible.executor.play_iterator.PlayIterator(inventory, play, play_context, variable_manager, all_vars, start_at_done=False)[source]

Bases: object

ITERATING_SETUP = 0
ITERATING_TASKS = 1
ITERATING_RESCUE = 2
ITERATING_ALWAYS = 3
ITERATING_COMPLETE = 4
FAILED_NONE = 0
FAILED_SETUP = 1
FAILED_TASKS = 2
FAILED_RESCUE = 4
FAILED_ALWAYS = 8
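The FAILED_* constants are powers of two, so a host's failure state can record several failed stages at once via bitwise OR, and each stage can be tested independently with bitwise AND. A small illustration:

```python
# The FAILED_* values form a bitmask: OR them to record failures,
# AND to test for a specific stage (illustrative sketch).
FAILED_NONE, FAILED_SETUP, FAILED_TASKS, FAILED_RESCUE, FAILED_ALWAYS = 0, 1, 2, 4, 8

state = FAILED_NONE
state |= FAILED_TASKS      # the task block failed...
state |= FAILED_ALWAYS     # ...and so did the always block

print(bool(state & FAILED_TASKS))   # True
print(bool(state & FAILED_RESCUE))  # False
print(state)                        # 10
```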
get_host_state(host)[source]
get_next_task_for_host(host, peek=False)[source]
mark_host_failed(host)[source]
get_failed_hosts()[source]
is_failed(host)[source]
get_original_task(host, task)[source]

Finds the task in the task list which matches the UUID of the given task.

The executor engine serializes/deserializes objects as they are passed through the different processes, and not all data structures are preserved. This method allows us to find the original task passed into the executor engine.
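Since a task object that crosses a process boundary is a deserialized copy rather than the original, identity comparison no longer works; matching on the task's UUID does. A minimal sketch of that lookup, with illustrative names:

```python
import uuid

# Sketch: match a deserialized task back to the original in-memory
# task by its UUID, since object identity is lost across processes.
class Task:
    def __init__(self, name):
        self.name = name
        self._uuid = str(uuid.uuid4())

task_list = [Task('gather facts'), Task('install package')]

def get_original_task(task_list, task_uuid):
    """Find the task in task_list whose UUID matches task_uuid."""
    for t in task_list:
        if t._uuid == task_uuid:
            return t
    return None

# A serialized/deserialized copy would carry only the UUID.
found = get_original_task(task_list, task_list[1]._uuid)
print(found.name)  # install package
```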

add_tasks(host, task_list)[source]

ansible.executor.playbook_executor module

class ansible.executor.playbook_executor.PlaybookExecutor(playbooks, inventory, variable_manager, loader, options, passwords)[source]

Bases: object

This is the primary class for executing playbooks, and thus the basis for bin/ansible-playbook operation.

run()[source]

Run the given playbook, based on the settings in the play which may limit the runs to serialized groups, etc.
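The "serialized groups" mentioned here come from a play's serial keyword, which caps how many hosts run through the play at once. A hedged sketch of the batching idea (not the actual PlaybookExecutor code):

```python
# Sketch: split hosts into successive batches of at most `serial`
# hosts, as a play's serial keyword limits concurrency (illustrative).
def serialize_hosts(hosts, serial):
    """Yield successive batches of at most `serial` hosts."""
    if serial <= 0:          # treat 0 as "all hosts in one batch"
        yield list(hosts)
        return
    for i in range(0, len(hosts), serial):
        yield hosts[i:i + serial]

hosts = ['web1', 'web2', 'web3', 'web4', 'web5']
print(list(serialize_hosts(hosts, 2)))
# [['web1', 'web2'], ['web3', 'web4'], ['web5']]
```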

ansible.executor.stats module

class ansible.executor.stats.AggregateStats[source]

Bases: object

holds stats about per-host activity during playbook runs

increment(what, host)[source]

helper function to bump a statistic

summarize(host)[source]

return information about a particular host
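A minimal sketch of what such an aggregate-stats object looks like: per-category counters keyed by host, an increment() helper to bump one, and a summarize() helper to collect a host's totals. Names and categories here are illustrative:

```python
from collections import defaultdict

# Illustrative sketch of per-host aggregate statistics.
class Stats:
    def __init__(self):
        # one counter dict per statistic category, keyed by host name
        self.ok = defaultdict(int)
        self.failures = defaultdict(int)
        self.changed = defaultdict(int)

    def increment(self, what, host):
        """Bump the `what` statistic for `host` by one."""
        getattr(self, what)[host] += 1

    def summarize(self, host):
        """Return all counters for a particular host."""
        return {
            'ok': self.ok[host],
            'failures': self.failures[host],
            'changed': self.changed[host],
        }

stats = Stats()
stats.increment('ok', 'web1')
stats.increment('ok', 'web1')
stats.increment('changed', 'web1')
print(stats.summarize('web1'))  # {'ok': 2, 'failures': 0, 'changed': 1}
```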

ansible.executor.task_executor module

class ansible.executor.task_executor.TaskExecutor(host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, rslt_q)[source]

Bases: object

Base worker class for the executor pipeline, responsible for loading an action plugin to dispatch the task to a given host.

This class roughly corresponds to the old Runner() class.

SQUASH_ACTIONS = frozenset(['dnf', 'pkgng', 'package', 'pacman', 'zypper', 'apt', 'apk', 'yum'])
run()[source]

Executor entrypoint for a task, responsible for a single execute or a loop of executes.

Determine if the task requires looping. If so, the task is run with self._run_loop(); for the case of a single task, self._execute() is called.

The collected task results are parsed and returned as a dict.
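The dispatch described above can be sketched as follows: run the execute callable once per loop item when the task loops, otherwise once with no item, and fold the per-item results into a single dict. The function names and result shape are illustrative, not TaskExecutor's actual internals:

```python
# Sketch of the run() dispatch: loop of executes vs. a single execute.
def run_task(execute, items=None):
    """Run `execute` once per loop item, or once with no item."""
    if items:
        results = [execute(item) for item in items]
        # aggregate: the task changed if any iteration changed
        return {'results': results, 'changed': any(r['changed'] for r in results)}
    return execute(None)

def fake_execute(item):
    # stand-in for self._execute(); pretend 'pkg2' changes something
    return {'item': item, 'changed': item == 'pkg2'}

print(run_task(fake_execute))                    # single execute
print(run_task(fake_execute, ['pkg1', 'pkg2']))  # loop of executes
```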

ansible.executor.task_queue_manager module

class ansible.executor.task_queue_manager.TaskQueueManager(inventory, variable_manager, loader, options, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False)[source]

Bases: object

This class handles the multiprocessing requirements of Ansible by creating a pool of worker forks, a result handler fork, and a manager object with shared datastructures/queues for coordinating work between all processes.

The queue manager is responsible for loading the play strategy plugin, which dispatches the Play’s tasks to hosts.

RUN_OK = 0
RUN_ERROR = 1
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 3
RUN_FAILED_BREAK_PLAY = 4
RUN_UNKNOWN_ERROR = 255
load_callbacks()[source]

Loads all available callbacks, with the exception of those which utilize the CALLBACK_TYPE option. When CALLBACK_TYPE is set to ‘stdout’, only one such callback plugin will be loaded.

run(play)[source]

Iterates over the roles/tasks in a play, using the given (or default) strategy for queueing tasks. The default is the linear strategy, which operates like classic Ansible by keeping all hosts in lock-step with a given task (meaning no hosts move on to the next task until all hosts are done with the current task).
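The lock-step behavior of the linear strategy can be pictured as two nested loops: the outer loop advances task by task, and the inner loop runs the current task on every host before the outer loop moves on. A toy illustration (not the actual strategy plugin, which dispatches to worker processes):

```python
# Sketch of linear, lock-step scheduling: all hosts finish the current
# task before any host starts the next one (illustrative).
def linear_run(hosts, tasks, execute):
    order = []
    for task in tasks:          # advance task-by-task...
        for host in hosts:      # ...running it on every host first
            execute(host, task)
            order.append((host, task))
    return order

log = linear_run(['web1', 'web2'], ['ping', 'copy'], lambda h, t: None)
print(log)
# [('web1', 'ping'), ('web2', 'ping'), ('web1', 'copy'), ('web2', 'copy')]
```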

cleanup()[source]
clear_failed_hosts()[source]
get_inventory()[source]
get_variable_manager()[source]
get_loader()[source]
get_notified_handlers()[source]
get_workers()[source]
terminate()[source]
send_callback(method_name, *args, **kwargs)[source]

ansible.executor.task_result module

class ansible.executor.task_result.TaskResult(host, task, return_data)[source]

Bases: object

This class is responsible for interpreting the resulting data from an executed task, and provides helper methods for determining the result of a given task.

is_changed()[source]
is_skipped()[source]
is_failed()[source]
is_unreachable()[source]
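These helpers amount to lookups on well-known keys in the task's return_data dict. A hedged sketch of that pattern, with an illustrative class name and a simplified failure rule (the real TaskResult also handles loop results and other edge cases):

```python
# Sketch of result-inspection helpers over a task's return_data dict.
class Result:
    def __init__(self, return_data):
        self._result = return_data

    def is_changed(self):
        return self._result.get('changed', False)

    def is_skipped(self):
        return self._result.get('skipped', False)

    def is_failed(self):
        # failed flag set, or a nonzero return code from the command
        return self._result.get('failed', False) or self._result.get('rc', 0) != 0

    def is_unreachable(self):
        return self._result.get('unreachable', False)

r = Result({'changed': True, 'rc': 0})
print(r.is_changed(), r.is_failed())  # True False
```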