Welcome to Fabric's documentation!

This site covers Fabric's usage and API documentation. For basic information on what Fabric is, including its changelog and how the project is maintained, please visit the main Fabric project website.

Getting started

New users, and/or those wanting a broad overview of Fabric's basic functionality, should start with the Overview and Tutorial. The rest of this documentation assumes you're at least passingly familiar with the material it contains.

Overview and Tutorial

Welcome to Fabric!

This document is a whirlwind tour of Fabric's features and a quick guide to its use. Additional documentation (which is linked to throughout) can be found in the usage documentation; please make sure to check it out.

What is Fabric?

As the README says:

Fabric is a Python (2.5-2.7) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.

More specifically, Fabric is:

  • a tool that lets you execute arbitrary Python functions via the command line;
  • a library of subroutines (built on top of a lower-level library) to make executing shell commands over SSH easy and Pythonic.

Naturally, most users combine these two things, using Fabric to write and execute Python functions, or tasks, to automate interactions with remote servers. Let's take a look.

Hello, fab

This wouldn't be a proper tutorial without "the usual":

def hello():
    print("Hello world!")

Placed in a Python module file named fabfile.py in your current working directory, that hello function can be executed with the fab tool (installed as part of Fabric), and it does just what you'd expect:

$ fab hello
Hello world!

Done.

That's all there is to it. This functionality lets Fabric be used as a (very) basic build tool, simple enough that you don't even have to import any of its API.

Note

The fab tool simply imports your fabfile and executes the function or functions you've instructed it to. There's nothing magical going on here; anything you can do in a normal Python module can be done in a fabfile.

Task arguments

It's often useful to pass runtime parameters into your tasks, just as you might during regular Python programming. Fabric supports a shell-compatible notation for this: <task name>:<arg>,<kwarg>=<value>,... It's contrived, but let's extend the above example so it says hello to you personally:

def hello(name="world"):
    print("Hello %s!" % name)

By default, calling fab hello still behaves as it did before; but now we can personalize it:

$ fab hello:name=Jeff
Hello Jeff!

Done.

As you may have guessed if you've used Python before, this invocation behaves exactly the same way:

$ fab hello:Jeff
Hello Jeff!

Done.

For the time being, argument values can only be used as Python strings; complex types such as lists require some string manipulation on your end. Future versions may add a typecasting system to make this easier.
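
For example, here's a minimal sketch of handling a list-like argument by hand (the add task and its semicolon convention are purely illustrative; semicolons avoid clashing with fab's comma-separated argument syntax):

def add(numbers="1;2;3"):
    # fab passes every task argument in as a string, so convert by hand.
    total = sum(int(n) for n in numbers.split(';'))
    print("Sum: %s" % total)

which you might invoke as fab add:numbers="4;5;6".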

Local commands

As used above, fab only really saves a few lines of if __name__ == "__main__" boilerplate. It's mostly designed for use with Fabric's own API, which contains functions (or operations) for executing shell commands, transferring files, and so forth.

Let's build a hypothetical fabfile for a Web application. The scenario: the code is tracked with Git on a remote host vcshost, and we've cloned that repository locally on localhost. Whenever we push our changes back to vcshost, we want to be able to immediately install the new version on another remote host, my_server. We'll do this by automating local and remote Git commands.

fabfiles generally work best at the root of a project:

.
|-- __init__.py
|-- app.wsgi
|-- fabfile.py <-- our fabfile!
|-- manage.py
`-- my_app
    |-- __init__.py
    |-- models.py
    |-- templates
    |   `-- index.html
    |-- tests.py
    |-- urls.py
    `-- views.py

Note

We're using a Django application here, but only as an example; Fabric isn't tied to any external codebase, save for its SSH library.

For starters, perhaps we want to run our tests and commit to our VCS (version control system), so we're ready for a deploy:

from fabric.api import local

def prepare_deploy():
    local("./manage.py test my_app")
    local("git add -p && git commit")
    local("git push")

The output of running this task would look something like this:

$ fab prepare_deploy
[localhost] run: ./manage.py test my_app
Creating test database...
Creating tables
Creating indexes
..........................................
----------------------------------------------------------------------
Ran 42 tests in 9.138s

OK
Destroying test database...

[localhost] run: git add -p && git commit

<interactive Git add / git commit edit message session>

[localhost] run: git push

<git push session, possibly merging conflicts interactively>

Done.

The code itself is straightforward: import one of Fabric's API functions, ~fabric.operations.local, and use it to run and interact with local shell commands. The rest of Fabric's API is similar; it's all just Python.

Organize it your way

Because Fabric is "just Python", you're free to organize your fabfile any way you want. For example, it's often useful to split tasks into subtasks:

from fabric.api import local

def test():
    local("./manage.py test my_app")

def commit():
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()

The prepare_deploy task can be called just as before, but now you can also make a more granular call to one of the subtasks whenever you want.

Failure

Our base case works fine now, but what happens if our tests fail? Chances are we want to put on the brakes and fix them before deploying.

Fabric checks the return value of the programs it calls; if a program doesn't exit cleanly, Fabric aborts. Let's see what happens when one of our tests encounters an error:

$ fab prepare_deploy
[localhost] run: ./manage.py test my_app
Creating test database...
Creating tables
Creating indexes
.............E............................
======================================================================
ERROR: testSomething (my_project.my_app.tests.MainTests)
----------------------------------------------------------------------
Traceback (most recent call last):
[...]

----------------------------------------------------------------------
Ran 42 tests in 9.138s

FAILED (errors=1)
Destroying test database...

Fatal error: local() encountered an error (return code 2) while executing './manage.py test my_app'

Aborting.

Great! We didn't have to do anything ourselves: Fabric detected the failure and aborted, never running the commit task.

Failure handling

But what if we wanted to be flexible and give the user a choice? A setting (or environment variable, usually shortened to env var) called warn_only lets you turn aborts into warnings, making flexible error handling possible.

Let's flip this setting on in our test function, and then inspect the result of the ~fabric.operations.local call:

from __future__ import with_statement
from fabric.api import local, settings, abort
from fabric.contrib.console import confirm

def test():
    with settings(warn_only=True):
        result = local('./manage.py test my_app', capture=True)
    if result.failed and not confirm("Tests failed. Continue anyway?"):
        abort("Aborting at user request.")

[...]

In adding this new feature we've introduced a number of new things:

  • the __future__ import needed to use the with statement under Python 2.5;
  • Fabric's fabric.contrib.console submodule, containing the ~fabric.contrib.console.confirm function, used for simple yes/no prompts;
  • the ~fabric.context_managers.settings context manager, used to apply settings to a specific block of code;
  • command-running operations like ~fabric.operations.local, which can return objects containing info about their result (such as .failed or .return_code);
  • and the ~fabric.utils.abort function, used to manually abort execution.

However, despite the additional complexity, it's still pretty easy to follow, and it is now much more flexible.

Making connections

Let's start wrapping up our fabfile by putting in the keystone: a deploy task that runs on one or more remote servers and ensures the code there is up to date:

def deploy():
    code_dir = '/srv/django/myproject'
    with cd(code_dir):
        run("git pull")
        run("touch app.wsgi")

Here again we introduce a handful of new concepts:

  • Fabric is just Python, so we can make liberal use of regular Python code constructs such as variables and string interpolation;
  • ~fabric.context_managers.cd, an easy way of prefixing commands with a cd /to/some/directory call; it is the remote counterpart of ~fabric.context_managers.lcd, which does the same locally;
  • ~fabric.operations.run, which is similar to ~fabric.operations.local but runs commands remotely instead of locally.

We also need to make sure we import the new functions at the top of our file:

from __future__ import with_statement
from fabric.api import local, settings, abort, run, cd
from fabric.contrib.console import confirm

With these changes in place, let's deploy:

$ fab deploy
No hosts found. Please specify (single) host string for connection: my_server
[my_server] run: git pull
[my_server] out: Already up-to-date.
[my_server] out:
[my_server] run: touch app.wsgi

Done.

We never specified any connection info in our fabfile, so Fabric doesn't know where to run those remote commands. When this happens, Fabric prompts us at runtime. Connection definitions use SSH-style "host strings" (e.g. user@host:port) and will default to your local username, so in this example we only had to specify the hostname, my_server.

Remote interactivity

git pull works fine if you've already got a checkout of the source code, but what if this is the first deploy? It'd be nice to handle that case too and do the initial git clone:

def deploy():
    code_dir = '/srv/django/myproject'
    with settings(warn_only=True):
        if run("test -d %s" % code_dir).failed:
            run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir)
    with cd(code_dir):
        run("git pull")
        run("touch app.wsgi")

As with our calls to ~fabric.operations.local above, ~fabric.operations.run also lets us construct clean Python-level logic on top of executed shell commands. The interesting part here, though, is the git clone call: since we're using Git's SSH method of accessing the repository on our Git server, our remote ~fabric.operations.run call will itself need to authenticate.

Older versions of Fabric (and similar high-level SSH libraries) ran remote programs in a limbo, unable to be touched from the local end. This was problematic when you had a serious need to enter passwords or otherwise interact with the remote program.

Fabric 1.0 and later breaks down this wall and ensures you can always talk to the other side. Let's see what happens when we run our updated deploy task on a brand-new server with no Git checkout:

$ fab deploy
No hosts found. Please specify (single) host string for connection: my_server
[my_server] run: test -d /srv/django/myproject

Warning: run() encountered an error (return code 1) while executing 'test -d /srv/django/myproject'

[my_server] run: git clone user@vcshost:/path/to/repo/.git /srv/django/myproject
[my_server] out: Cloning into /srv/django/myproject...
[my_server] out: Password: <enter password>
[my_server] out: remote: Counting objects: 6698, done.
[my_server] out: remote: Compressing objects: 100% (2237/2237), done.
[my_server] out: remote: Total 6698 (delta 4633), reused 6414 (delta 4412)
[my_server] out: Receiving objects: 100% (6698/6698), 1.28 MiB, done.
[my_server] out: Resolving deltas: 100% (4633/4633), done.
[my_server] out:
[my_server] run: git pull
[my_server] out: Already up-to-date.
[my_server] out:
[my_server] run: touch app.wsgi

Done.

Notice the Password: prompt; that was our remote git call on the Web server asking for the password to the Git server. We were able to type it in and the clone continued normally.

Defining connections beforehand

Specifying connection info at runtime gets old real fast, so Fabric provides convenient ways to set it in your fabfile or on the command line. We won't cover all of them here, but we will show you the most common one: setting the global host list, env.hosts.

env is a global dictionary-like object driving many of Fabric's settings, and can be written to with attribute syntax as well (in fact, ~fabric.context_managers.settings, seen earlier, is simply a wrapper for this). Thus, we can modify it at module level, near the top of our fabfile, like so:

from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

env.hosts = ['my_server']

def test():
    do_test_stuff()

fab 加载我们的 fabfile 时,我们对 env 的修改将被执行,并保存为对设置的修改。最终的结果就如上面所示:我们的 deploy 任务将在 my_server 上运行。

这也是你如何告诉 Fabric 一次在多台远程服务器上运行的方法:因为 env.hosts 是一个列表, fab 对它进行迭代,为每个连接调用指定的任务。

小结

在经过了这么多,我们的完整的 fabfile 文件仍然相当短。下面是它的完整内容:

from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

env.hosts = ['my_server']

def test():
    with settings(warn_only=True):
        result = local('./manage.py test my_app', capture=True)
    if result.failed and not confirm("Tests failed. Continue anyway?"):
        abort("Aborting at user request.")

def commit():
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()

def deploy():
    code_dir = '/srv/django/myproject'
    with settings(warn_only=True):
        if run("test -d %s" % code_dir).failed:
            run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir)
    with cd(code_dir):
        run("git pull")
        run("touch app.wsgi")

This fabfile makes use of a large portion of Fabric's feature set:

  • defining fabfile tasks and running them with fab;
  • calling local shell commands with ~fabric.operations.local;
  • modifying env vars with ~fabric.context_managers.settings;
  • handling command failures, prompting the user, and manually aborting;
  • and defining host lists and running remote commands with ~fabric.operations.run.

There's still a lot more we haven't covered here, though! Please make sure you follow the various "see also" links, and check out the documentation table of contents in the index.

Thanks for reading!

Usage documentation

The following list contains all the major sections of Fabric's prose (non-API) documentation. It expands upon the concepts mentioned in the Overview and Tutorial and also covers some advanced topics.

The environment dictionary, env

A simple but integral aspect of Fabric is what is known as the "environment": a Python dictionary subclass that is used as a combined settings registry and shared inter-task data namespace.

The environment dict is currently implemented as a global singleton, fabric.state.env, and is included in fabric.api for convenience. Keys in env are often referred to as "env variables".

Environment as configuration

Most of Fabric's behavior is controllable by modifying env variables, such as env.hosts, already seen in the Overview and Tutorial. Other commonly-modified env vars include:

  • user: Fabric defaults to your local username when making SSH connections, but you can use env.user to override this if necessary. The Execution model documentation also covers how to specify usernames on a per-host basis.
  • password: Used to explicitly set your default connection or sudo password if desired. Fabric will prompt you when necessary if this isn't set or doesn't appear valid.
  • warn_only: a Boolean setting determining whether Fabric exits when detecting errors on the remote end. See Execution model for more on this behavior.

There are many other env vars; see the full list of env vars at the bottom of this document.
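
As a quick sketch, setting these near the top of a fabfile might look like the following (all values are illustrative only):

from fabric.api import env

env.user = 'deploy'      # override the default SSH username
env.password = 's3cret'  # default connection/sudo password
env.warn_only = True     # warn instead of aborting on remote errors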

The ~fabric.context_managers.settings context manager

In many situations, it's useful to temporarily modify env vars so that a given settings change only applies to a specific block of code. Fabric provides the ~fabric.context_managers.settings context manager, which takes any number of key/value pairs and uses them to modify env within the block it wraps.

For example, there are many situations where setting warn_only (see below) comes in handy. To apply it to just a few lines of code, use settings(warn_only=True), as seen in this simplified version of the contrib ~fabric.contrib.files.exists function:

from fabric.api import settings, run

def exists(path):
    with settings(warn_only=True):
        return run('test -e %s' % path)

See the Context Managers API documentation for details on ~fabric.context_managers.settings and other, similar tools.

Environment as shared state

As mentioned, the env object is simply a dictionary subclass, so your own fabfile code may store information in it as well. This is sometimes useful for keeping state between multiple tasks within a single execution run.
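
For example, one task might stash a value in env for a later task in the same run to pick up; a sketch (the build_version key is made up for illustration, not a Fabric setting):

from fabric.api import env, run

def record_version():
    # env is just a dict, so arbitrary keys are fine.
    env.build_version = run('git rev-parse --short HEAD')

def report_version():
    # A later task in the same run can read the stored value back.
    print("Deploying version %s" % env.build_version)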

Note

This aspect of env is largely historical: in the past, fabfiles were not pure Python, so the environment was the only way to communicate between tasks. Nowadays, you can call other tasks or subroutines directly and even keep module-level shared state if you wish.

In future versions, Fabric will become threadsafe, at which point env may be the only easy/safe way to keep global state.

Other considerations

While it subclasses dict, Fabric's env has been modified so that its values may be read and written by way of attribute access, as seen throughout the material above. In other words, env.host_string and env['host_string'] are functionally identical. We feel that attribute access often saves a bit of typing and makes the code more readable, so it's the recommended way to interact with env.

The fact that it's a dictionary can be useful in other ways, such as with Python's dict-based string interpolation, which is especially handy when you need to insert multiple env vars into a single string. Using "normal" string interpolation might look like this:

print("Executing on %s as %s" % (env.host, env.user))

Using dict-style interpolation is more readable and slightly shorter:

print("Executing on %(host)s as %(user)s" % env)

Full list of env vars

Below is a full list of all predefined (or defined by Fabric itself during execution) environment variables. While many of them may be manipulated directly, it's often best to use ~fabric.context_managers, either generally via ~fabric.context_managers.settings or via specific context managers such as ~fabric.context_managers.cd.

Note that many of these may be set via fab's command-line switches; see fab options and arguments for details. Cross-references are provided where appropriate.

See also

--set

abort_exception

Default: None

Normally, when Fabric aborts, it does so by printing an error message to stderr and calling sys.exit(1). This setting allows you to override that default behavior (which is what happens when env.abort_exception is None).

Give it a callable that takes a string (the error message that would have been printed) and returns an exception instance. That exception object is then raised instead of SystemExit (which is what sys.exit would have raised).

Much of the time you'll simply want to set this to an exception class, as those fit the description perfectly (callable, takes a string, returns an exception instance), e.g. env.abort_exception = MyExceptionClass.
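
A short sketch of what that looks like in practice (DeployError is a made-up class name):

from fabric.api import env

class DeployError(Exception):
    pass

# Aborts now raise DeployError instead of calling sys.exit(1),
# so wrapping code can catch and handle them.
env.abort_exception = DeployError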

abort_on_prompts

Default: False

When True, Fabric will run in a non-interactive mode, calling ~fabric.utils.abort anytime it would normally prompt the user for input (such as password prompts, "which host to connect to?" prompts, fabfile invocations of ~fabric.operations.prompt, and so forth). This allows users to ensure a Fabric session always terminates cleanly, instead of blocking on user input forever when unforeseen circumstances arise.

New in version 1.1.

all_hosts

Default: []

Set by fab to the full host list for the currently executing command. For informational purposes only.

always_use_pty

Default: True

When set to False, causes ~fabric.operations.run/~fabric.operations.sudo to act as if they had been called with pty=False.

See also

--no-pty

New in version 1.0.

colorize_errors

Default: False

When set to True, error output to the terminal is colored red and warnings are colored magenta, to make them easier to spot.

New in version 1.7.

combine_stderr

Default: True

Causes the SSH layer to merge a remote program's stdout and stderr streams, to avoid having them mangled together when printed. See Combining stdout and stderr for details on why this is needed and what its effects are.

New in version 1.0.

command

Default: None

Set by fab to the name of the currently executing command (e.g. when run as $ fab task1 task2: while task1 executes, env.command is set to "task1", and afterwards to "task2"). For informational purposes only.

command_prefixes

Default: []

Modified by ~fabric.context_managers.prefix, and prepended to commands executed by ~fabric.operations.run/~fabric.operations.sudo.

New in version 1.0.

command_timeout

Default: None

Remote command timeout, in seconds.

New in version 1.6.

connection_attempts

Default: 1

The number of times Fabric will attempt to connect when connecting to a new server. For backwards compatibility reasons, it defaults to a single connection attempt.

New in version 1.4.

cwd

Default: ''

The current working directory. Used to keep state for the ~fabric.context_managers.cd context manager.

dedupe_hosts

Default: True

Deduplicates merged host lists so that any given host string is only represented once (e.g. when using a combination of @hosts and @roles, or -H and -R together).

When set to False, duplicates are kept, which allows users to explicitly run a task multiple times on the same host (serially or in parallel).

New in version 1.5.

disable_known_hosts

Default: False

If True, the SSH layer will skip loading the user's known-hosts file. This is useful for avoiding exceptions when a "known host" changes its key but is otherwise still valid (e.g. cloud servers such as EC2).

eagerly_disconnect

Default: False

When True, fab closes connections after each individual task finishes, instead of at the end of the whole run. This helps prevent a pile-up of stale network sessions, and avoids problems caused by limits on per-process open files or by network hardware.

Note

When this setting is on, the disconnect messages will appear throughout your output instead of at the end. Future releases may improve this.

effective_roles

Default: []

Set by fab to the role list for the currently executing command. For informational purposes only.

exclude_hosts

Default: []

Specifies a list of host strings to be skipped over during fab execution. Typically set via --exclude-hosts/-x.

New in version 1.1.

fabfile

Default: fabfile.py

The filename pattern fab searches for when loading fabfiles. To indicate a specific file, use the full path to it. Obviously, it isn't possible to set this from within a fabfile, but it may be specified in a .fabricrc file or on the command line.

gateway

Default: None

Enables SSH-driven gatewaying through the indicated host. The value should be a normal Fabric host string, as used in env.host_string. When this is set, newly created connections will route their SSH traffic through that remote SSH connection to the final destination.

New in version 1.5.

See also

--gateway

host_string

Default: None

Defines the user/host/port that Fabric connects to when executing ~fabric.operations.run, ~fabric.operations.put and so forth. It is set by fab when iterating over a previously set host list, and may also be set manually when using Fabric as a library.

forward_agent

Default: False

If True, enables forwarding of your local SSH agent to the remote end.

New in version 1.4.

host

Default: None

Set to the hostname part of env.host_string by fab. For informational purposes only.

hosts

Default: []

The global host list used when composing per-task host lists.

keepalive

Default: 0 (i.e. no keepalive)

An integer specifying an SSH keepalive interval to use; basically maps to the SSH config option ClientAliveInterval. Useful if you find connections are timing out due to meddlesome network hardware or what have you.

See also

--keepalive

New in version 1.1.

key

Default: None

A string, or file-like object, containing an SSH key; used during connection authentication.

Note

The most common method for using SSH keys is to set key_filename.

New in version 1.7.

key_filename

Default: None

May be a string or list of strings, referencing file paths to SSH key files to try when connecting. Passed through directly to the SSH layer. May be set/appended to with -i.

linewise

Default: False

Forces buffering by line instead of by character/byte, typically when running in parallel mode. May be activated via --linewise. This option is implied by env.parallel – even if linewise is False, if parallel is True then linewise behavior will occur.

New in version 1.3.

local_user

A read-only value containing the local system username. This is the same value as user's initial value, but whereas user may be altered by CLI arguments, Python code or specific host strings, local_user will always contain the same value.

no_agent

Default: False

If True, will tell the SSH layer not to seek out running SSH agents when using key-based authentication.

New in version 0.9.1.

See also

--no_agent

no_keys

Default: False

If True, will tell the SSH layer not to load any private key files from one’s $HOME/.ssh/ folder. (Key files explicitly loaded via fab -i will still be used, of course.)

New in version 0.9.1.

See also

-k

parallel

Default: False

When True, forces all tasks to run in parallel. Implies env.linewise.

New in version 1.3.

password

Default: None

The default password used by the SSH layer when connecting to remote hosts, and/or when answering ~fabric.operations.sudo prompts.

passwords

Default: {}

This dictionary is largely for internal use, and is filled automatically as a per-host-string password cache. Keys are full host strings and values are passwords (strings).

New in version 1.0.

path

Default: ''

Used to set the $PATH shell environment variable when executing commands in ~fabric.operations.run/~fabric.operations.sudo/~fabric.operations.local. It is recommended to use the ~fabric.context_managers.path context manager for managing this value instead of setting it directly.

New in version 1.0.

pool_size

Default: 0

Sets the number of concurrent processes to use when executing tasks in parallel.

New in version 1.3.

prompts

Default: {}

The prompts dictionary allows users to control interactive prompts. If a key in the dictionary is found in a command’s standard output stream, Fabric will automatically answer with the corresponding dictionary value.

New in version 1.9.
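
For example, auto-answering a confirmation prompt might look like this sketch (both the prompt text and the answer are illustrative values):

from fabric.api import env, sudo

# If this exact text appears in the command's output stream,
# Fabric automatically sends the corresponding answer.
env.prompts = {'Do you want to continue [Y/n]? ': 'Y'}

def upgrade():
    sudo('apt-get upgrade')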

port

Default: None

Set to the port part of env.host_string by fab when iterating over a host list. May also be used to specify a default port.

real_fabfile

Default: None

Set by fab with the path to the fabfile it has loaded up, if it got that far. For informational purposes only.

remote_interrupt

Default: None

Controls whether Ctrl-C triggers an interrupt remotely or is captured locally, as follows:

  • None (the default): only ~fabric.operations.open_shell will exhibit remote interrupt behavior, and ~fabric.operations.run/~fabric.operations.sudo will capture interrupts locally.
  • False: even ~fabric.operations.open_shell captures locally.
  • True: all functions will send the interrupt to the remote end.

New in version 1.6.

rcfile

Default: $HOME/.fabricrc

Path used when loading Fabric’s local settings file.

reject_unknown_hosts

Default: False

If True, the SSH layer will raise an exception when connecting to hosts not listed in the user’s known-hosts file.

system_known_hosts

Default: None

If set, should be the path to a known_hosts file. The SSH layer will read this file before reading the user’s known-hosts file.

See also

SSH behavior

roledefs

Default: {}

Dictionary defining role name to host list mappings.

roles

Default: []

The global role list used when composing per-task host lists.

shell

Default: /bin/bash -l -c

Value used as shell wrapper when executing commands with e.g. ~fabric.operations.run. Must be able to exist in the form <env.shell> "<command goes here>" – e.g. the default uses Bash’s -c option which takes a command string as its value.

See also

--shell, FAQ on bash as default shell, Execution model

skip_bad_hosts

Default: False

If True, causes fab (or non-fab use of ~fabric.tasks.execute) to skip over hosts it can’t connect to.

New in version 1.4.

ssh_config_path

Default: $HOME/.ssh/config

Allows specification of an alternate SSH configuration file path.

New in version 1.4.

ok_ret_codes

Default: [0]

Return codes in this list are used to determine whether calls to ~fabric.operations.run/~fabric.operations.sudo are considered successful.

New in version 1.6.

sudo_prefix

Default: "sudo -S -p '%(sudo_prompt)s' " % env

The actual sudo command prefixed onto ~fabric.operations.sudo calls’ command strings. Users who do not have sudo on their default remote $PATH, or who need to make other changes (such as removing the -p when passwordless sudo is in effect) may find changing this useful.

See also

The ~fabric.operations.sudo operation; env.sudo_prompt

sudo_prompt

Default: "sudo password:"

Passed to the sudo program on remote systems so that Fabric may correctly identify its password prompt.

See also

The ~fabric.operations.sudo operation; env.sudo_prefix

sudo_user

Default: None

Used as a fallback value for ~fabric.operations.sudo's user argument if none is given. Useful in combination with ~fabric.context_managers.settings.

See also

~fabric.operations.sudo

tasks

Default: []

Set by fab to the full tasks list to be executed for the currently executing command. For informational purposes only.

timeout

Default: 10

Network connection timeout, in seconds.

New in version 1.4.

use_shell

Default: True

Global setting which acts like the shell argument to ~fabric.operations.run/~fabric.operations.sudo: if it is set to False, operations will not wrap executed commands in env.shell.

use_ssh_config

Default: False

Set to True to cause Fabric to load your local SSH config file.

New in version 1.4.

user

Default: User's local username

The username used by the SSH layer when connecting to remote hosts. May be set globally, and will be used when not otherwise explicitly set in host strings. However, when explicitly given in such a manner, this variable will be temporarily overwritten with the current value – i.e. it will always display the user currently being connected as.

To illustrate this, a fabfile:

from fabric.api import env, run

env.user = 'implicit_user'
env.hosts = ['host1', 'explicit_user@host2', 'host3']

def print_user():
    with hide('running'):
        run('echo "%(user)s"' % env)

and its use:

$ fab print_user

[host1] out: implicit_user
[explicit_user@host2] out: explicit_user
[host3] out: implicit_user

Done.
Disconnecting from host1... done.
Disconnecting from host2... done.
Disconnecting from host3... done.

As you can see, during execution on host2, env.user was set to "explicit_user", but was restored to its previous value ("implicit_user") afterwards.

Note

env.user is currently somewhat confusing (it’s used for configuration and informational purposes) so expect this to change in the future – the informational aspect will likely be broken out into a separate env variable.

version

Default: current Fabric version string

Mostly for informational purposes. Modification is not recommended, but probably won’t break anything either.

See also

--version

warn_only

Default: False

Specifies whether or not to warn, instead of abort, when ~fabric.operations.run/~fabric.operations.sudo/~fabric.operations.local encounter error conditions.

Execution model

If you've read the Overview and Tutorial, you should already be familiar with how Fabric operates in the base case (a single task on a single host.) However, in many situations you'll find yourself wanting to execute multiple tasks and/or on multiple hosts. Perhaps you want to split a big task into smaller reusable parts, or crawl a collection of servers looking for an old user to remove. Such a scenario requires specific rules for when and how tasks are executed.

This document explores Fabric’s execution model, including the main execution loop, how to define host lists, how connections are made, and so forth.

Execution strategy

Fabric defaults to a single, serial execution method, though there is an alternative parallel mode available as of Fabric 1.3 (see Parallel execution). This default behavior is as follows:

  • A list of tasks is created. Currently this list is simply the arguments given to fab, preserving the order given.
  • For each task, a task-specific host list is generated from various sources (see How host lists are constructed below for details.)
  • The task list is walked through in order, and each task is run once per host in its host list.
  • Tasks with no hosts in their host list are considered local-only, and will always run once and only once.

Thus, given the following fabfile:

from fabric.api import run, env

env.hosts = ['host1', 'host2']

def taskA():
    run('ls')

def taskB():
    run('whoami')

and the following invocation:

$ fab taskA taskB

you will see that Fabric performs the following:

  • taskA executed on host1
  • taskA executed on host2
  • taskB executed on host1
  • taskB executed on host2

While this approach is simplistic, it allows for a straightforward composition of task functions, and (unlike tools which push the multi-host functionality down to the individual function calls) enables shell script-like logic where you may introspect the output or return code of a given command and decide what to do next.

Defining tasks

For details on what constitutes a Fabric task and how to organize them, please see Defining tasks.

Defining host lists

Unless you're using Fabric as a simple build system (which is possible, but not the primary use-case) having tasks won't do you any good without the ability to specify remote hosts on which to execute them. There are a number of ways to do so, with scopes varying from global to per-task, and it's possible to mix and match as needed.

Hosts

Hosts, in this context, refer to what are also called “host strings”: Python strings specifying a username, hostname and port combination, in the form of username@hostname:port. User and/or port (and the associated @ or :) may be omitted, and will be filled by the executing user’s local username, and/or port 22, respectively. Thus, admin@foo.com:222, deploy@website and nameserver1 could all be valid host strings.

IPv6 address notation is also supported, for example ::1, [::1]:1222, user@2001:db8::1 or user@[2001:db8::1]:1222. Square brackets are necessary only to separate the address from the port number. If no port number is used, the brackets are optional. Also, if the host string is specified via a command-line argument, you may need to escape the brackets in some shells.

Note

The user/hostname split occurs at the last @ found, so e.g. email address usernames are valid and will be parsed correctly.

During execution, Fabric normalizes the host strings given and then stores each part (username/hostname/port) in the environment dictionary, for both its use and for tasks to reference if the need arises. See The environment dictionary, env for details.

Roles

Host strings map to single hosts, but sometimes it’s useful to arrange hosts in groups. Perhaps you have a number of Web servers behind a load balancer and want to update all of them, or want to run a task on “all client servers”. Roles provide a way of defining strings which correspond to lists of host strings, and can then be specified instead of writing out the entire list every time.

This mapping is defined as a dictionary, env.roledefs, which must be modified by a fabfile in order to be used. A simple example:

from fabric.api import env

env.roledefs['webservers'] = ['www1', 'www2', 'www3']

Since env.roledefs is naturally empty by default, you may also opt to re-assign to it without fear of losing any information (provided you aren’t loading other fabfiles which also modify it, of course):

from fabric.api import env

env.roledefs = {
    'web': ['www1', 'www2', 'www3'],
    'dns': ['ns1', 'ns2']
}

In addition to list/iterable object types, the values in env.roledefs may be callables, which will then be called when looked up at task runtime, instead of at module load time. (For example, you could connect to remote servers to obtain role definitions, without causing delays at fabfile load time when calling e.g. fab --list.)
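
A sketch of a callable roledef (the inventory lookup is faked with a hardcoded list here):

from fabric.api import env

def webservers():
    # Imagine querying a database or HTTP API here; the lookup happens
    # only when the role is actually used, not at fabfile load time.
    return ['www1', 'www2', 'www3']

env.roledefs = {'web': webservers}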

Use of roles is not required in any way – it’s simply a convenience in situations where you have common groupings of servers.

Changed in version 0.9.2: Added ability to use callables as roledefs values.

How host lists are constructed

There are a number of ways to specify host lists, either globally or per-task, and generally these methods override one another instead of merging together (though this may change in future releases.) Each such method is typically split into two parts, one for hosts and one for roles.

Globally, via env

The most common method of setting hosts or roles is by modifying two key-value pairs in the environment dictionary, env: hosts and roles. The value of these variables is checked at runtime, while constructing each task's host list.

Thus, they may be set at module level, which will take effect when the fabfile is imported:

from fabric.api import env, run

env.hosts = ['host1', 'host2']

def mytask():
    run('ls /var/www')

Such a fabfile, run simply as fab mytask, will run mytask on host1 followed by host2.

Since the env vars are checked for each task, this means that if you have the need, you can actually modify env in one task and it will affect all following tasks:

from fabric.api import env, run

def set_hosts():
    env.hosts = ['host1', 'host2']

def mytask():
    run('ls /var/www')

When run as fab set_hosts mytask, set_hosts is a “local” task – its own host list is empty – but mytask will again run on the two hosts given.

Note

This technique used to be a common way of creating fake “roles”, but is less necessary now that roles are fully implemented. It may still be useful in some situations, however.

Alongside env.hosts is env.roles (not to be confused with env.roledefs!) which, if given, will be taken as a list of role names to look up in env.roledefs.

Globally, via the command line

In addition to modifying env.hosts, env.roles, and env.exclude_hosts at the module level, you may define them by passing comma-separated string arguments to the command-line switches --hosts/-H and --roles/-R, e.g.:

$ fab -H host1,host2 mytask

Such an invocation is directly equivalent to env.hosts = ['host1', 'host2'] – the argument parser knows to look for these arguments and will modify env at parse time.

Note

It’s possible, and in fact common, to use these switches to set only a single host or role. Fabric simply calls string.split(',') on the given string, so a string with no commas turns into a single-item list.

It is important to know that these command-line switches are interpreted before your fabfile is loaded: any reassignment to env.hosts or env.roles in your fabfile will overwrite them.

If you wish to nondestructively merge the command-line hosts with your fabfile-defined ones, make sure your fabfile uses env.hosts.extend() instead:

from fabric.api import env, run

env.hosts.extend(['host3', 'host4'])

def mytask():
    run('ls /var/www')

When this fabfile is run as fab -H host1,host2 mytask, env.hosts will then contain ['host1', 'host2', 'host3', 'host4'] at the time that mytask is executed.

Note

env.hosts is simply a Python list object – so you may use env.hosts.append() or any other such method you wish.

Per-task, via the command line

Globally setting host lists only works if you want all your tasks to run on the same host list all the time. This isn’t always true, so Fabric provides a few ways to be more granular and specify host lists which apply to a single task only. The first of these uses task arguments.

As outlined in fab options and arguments, it’s possible to specify per-task arguments via a special command-line syntax. In addition to naming actual arguments to your task function, this may be used to set the host, hosts, role or roles “arguments”, which are interpreted by Fabric when building host lists (and removed from the arguments passed to the task itself.)

Note

Since commas are already used to separate task arguments from one another, semicolons must be used in the hosts or roles arguments to delineate individual host strings or role names. Furthermore, the argument must be quoted to prevent your shell from interpreting the semicolons.

Take the below fabfile, which is the same one we’ve been using, but which doesn’t define any host info at all:

from fabric.api import run

def mytask():
    run('ls /var/www')

To specify per-task hosts for mytask, execute it like so:

$ fab mytask:hosts="host1;host2"

This will override any other host list and ensure mytask always runs on just those two hosts.

Per-task, via decorators

If a given task should always run on a predetermined host list, you may wish to specify this in your fabfile itself. This can be done by decorating a task function with the ~fabric.decorators.hosts or ~fabric.decorators.roles decorators. These decorators take a variable argument list, like so:

from fabric.api import hosts, run

@hosts('host1', 'host2')
def mytask():
    run('ls /var/www')

They will also take a single iterable argument, e.g.:

my_hosts = ('host1', 'host2')
@hosts(my_hosts)
def mytask():
    # ...

When used, these decorators override any checks of env for that particular task’s host list (though env is not modified in any way – it is simply ignored.) Thus, even if the above fabfile had defined env.hosts or the call to fab uses --hosts/-H, mytask would still run on a host list of ['host1', 'host2'].

However, decorator host lists do not override per-task command-line arguments, as given in the previous section.

Order of precedence

We’ve been pointing out which methods of setting host lists trump the others, as we’ve gone along. However, to make things clearer, here’s a quick breakdown:

  • Per-task, command-line host lists (fab mytask:host=host1) override absolutely everything else.
  • Per-task, decorator-specified host lists (@hosts('host1')) override the env variables.
  • Globally specified host lists set in the fabfile (env.hosts = ['host1']) can override such lists set on the command-line, but only if you’re not careful (or want them to.)
  • Globally specified host lists set on the command-line (--hosts=host1) will initialize the env variables, but that’s it.

This logic may change slightly in the future to be more consistent (e.g. having --hosts somehow take precedence over env.hosts in the same way that command-line per-task lists trump in-code ones) but only in a backwards-incompatible release.

Combining host lists

There is no “unionizing” of hosts between the various sources mentioned in How host lists are constructed. If env.hosts is set to ['host1', 'host2', 'host3'], and a per-function (e.g. via ~fabric.decorators.hosts) host list is set to just ['host2', 'host3'], that function will not execute on host1, because the per-task decorator host list takes precedence.

However, for each given source, if both roles and hosts are specified, they will be merged together into a single host list. Take, for example, this fabfile where both of the decorators are used:

from fabric.api import env, hosts, roles, run

env.roledefs = {'role1': ['b', 'c']}

@hosts('a', 'b')
@roles('role1')
def mytask():
    run('ls /var/www')

Assuming no command-line hosts or roles are given when mytask is executed, this fabfile will call mytask on a host list of ['a', 'b', 'c'] – the union of role1 and the contents of the ~fabric.decorators.hosts call.

Host list deduplication

By default, to support Combining host lists, Fabric deduplicates the final host list so any given host string is only present once. However, this prevents explicit/intentional running of a task multiple times on the same target host, which is sometimes useful.

To turn off deduplication, set env.dedupe_hosts to False.

Excluding specific hosts

At times, it is useful to exclude one or more specific hosts, e.g. to override a few bad or otherwise undesirable hosts which are pulled in from a role or an autogenerated host list.

Note

As of Fabric 1.4, you may wish to use skip_bad_hosts instead, which automatically skips over any unreachable hosts.

Host exclusion may be accomplished globally with --exclude-hosts/-x:

$ fab -R myrole -x host2,host5 mytask

If myrole was defined as ['host1', 'host2', ..., 'host15'], the above invocation would run with an effective host list of ['host1', 'host3', 'host4', 'host6', ..., 'host15'].

Note

Using this option does not modify env.hosts – it only causes the main execution loop to skip the requested hosts.

Exclusions may be specified per-task by using an extra exclude_hosts kwarg, which is implemented similarly to the abovementioned hosts and roles per-task kwargs, in that it is stripped from the actual task invocation. This example would have the same result as the global exclude above:

$ fab mytask:roles=myrole,exclude_hosts="host2;host5"

Note that the host list is semicolon-separated, just as with the hosts per-task argument.

Combining exclusions

Host exclusion lists, like host lists themselves, are not merged together across the different “levels” they can be declared in. For example, a global -x option will not affect a per-task host list set with a decorator or keyword argument, nor will per-task exclude_hosts keyword arguments affect a global -H list.

There is one minor exception to this rule, namely that CLI-level keyword arguments (mytask:exclude_hosts=x,y) will be taken into account when examining host lists set via @hosts or @roles. Thus a task function decorated with @hosts('host1', 'host2') executed as fab taskname:exclude_hosts=host2 will only run on host1.

As with the host list merging, this functionality is currently limited (partly to keep the implementation simple) and may be expanded in future releases.

Intelligently executing tasks with execute

New in version 1.3.

Most of the information here involves “top level” tasks executed via fab, such as the first example where we called fab taskA taskB. However, it’s often convenient to wrap up multi-task invocations like this into their own, “meta” tasks.

Prior to Fabric 1.3, this had to be done by hand, as outlined in Library Use. Fabric’s design eschews magical behavior, so simply calling a task function does not take into account decorators such as ~fabric.decorators.roles.

New in Fabric 1.3 is the ~fabric.tasks.execute helper function, which takes a task object or name as its first argument. Using it is effectively the same as calling the given task from the command line: all the rules given above in How host lists are constructed apply. (The hosts and roles keyword arguments to ~fabric.tasks.execute are analogous to CLI per-task arguments, including how they override all other host/role-setting methods.)

As an example, here’s a fabfile defining two stand-alone tasks for deploying a Web application:

from fabric.api import env, run, roles

env.roledefs = {
    'db': ['db1', 'db2'],
    'web': ['web1', 'web2', 'web3'],
}

@roles('db')
def migrate():
    # Database stuff here.
    pass

@roles('web')
def update():
    # Code updates here.
    pass

In Fabric <=1.2, the only way to ensure that migrate runs on the DB servers and that update runs on the Web servers (short of manual env.host_string manipulation) was to call both as top level tasks:

$ fab migrate update

Fabric >=1.3 can use ~fabric.tasks.execute to set up a meta-task. Update the import line like so:

from fabric.api import env, run, roles, execute

and append this to the bottom of the file:

def deploy():
    execute(migrate)
    execute(update)

That’s all there is to it; the ~fabric.decorators.roles decorators will be honored as expected, resulting in the following execution sequence:

  • migrate on db1
  • migrate on db2
  • update on web1
  • update on web2
  • update on web3

Warning

This technique works because tasks that themselves have no host list (this includes the global host list settings) only run one time. If used inside a “regular” task that is going to run on multiple hosts, calls to ~fabric.tasks.execute will also run multiple times, resulting in multiplicative numbers of subtask calls – be careful!

If you would like your execute calls to only be called once, you may use the ~fabric.decorators.runs_once decorator.
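
Continuing the example above, a sketch of that combination:

from fabric.api import execute, runs_once

@runs_once
def deploy():
    # runs_once ensures this wrapper fires a single time, so the
    # execute() calls below won't multiply if deploy gains a host list.
    execute(migrate)
    execute(update)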

See also

~fabric.tasks.execute, ~fabric.decorators.runs_once

Leveraging execute to access multi-host results

In nontrivial Fabric runs, especially parallel ones, you may want to gather up a bunch of per-host result values at the end - e.g. to present a summary table, perform calculations, etc.

It's not possible to do this in Fabric's default "naive" mode (one where you rely on Fabric looping over host lists on your behalf), but with ~fabric.tasks.execute it's pretty easy. Simply switch from calling the actual work-bearing task to calling a "meta" task which takes control of execution with ~fabric.tasks.execute:

from fabric.api import task, execute, run, runs_once

@task
def workhorse():
    return run("get my infos")

@task
@runs_once
def go():
    results = execute(workhorse)
    print results

In the above, workhorse can do any Fabric stuff at all – it’s literally your old “naive” task – except that it needs to return something useful.

go is your new entry point (to be invoked as fab go, or whatnot) and its job is to take the results dictionary from the ~fabric.tasks.execute call and do whatever you need with it. Check the API docs for details on the structure of that return value.

Using execute with dynamically-set host lists

A common intermediate-to-advanced use case for Fabric is to parameterize lookup of one’s target host list at runtime (when use of Roles does not suffice). execute can make this extremely simple, like so:

from fabric.api import run, execute, task

# For example, code talking to an HTTP API, or a database, or ...
from mylib import external_datastore

# This is the actual algorithm involved. It does not care about host
# lists at all.
def do_work():
    run("something interesting on a host")

# This is the user-facing task invoked on the command line.
@task
def deploy(lookup_param):
    # This is the magic you don't get with @hosts or @roles.
    # Even lazy-loading roles require you to declare available roles
    # beforehand. Here, the sky is the limit.
    host_list = external_datastore.query(lookup_param)
    # Put this dynamically generated host list together with the work to be
    # done.
    execute(do_work, hosts=host_list)

For example, if external_datastore was a simplistic “look up hosts by tag in a database” service, and you wanted to run a task on all hosts tagged as being related to your application stack, you might call the above like this:

$ fab deploy:app

But wait! A data migration has gone awry on the DB servers. Let’s fix up our migration code in our source repo, and deploy just the DB boxes again:

$ fab deploy:db

This use case looks similar to Fabric’s roles, but has much more potential, and is by no means limited to a single argument. Define the task however you wish, query your external data store in whatever way you need – it’s just Python.

The alternate approach

Similar to the above, but using fab's ability to call multiple tasks in succession instead of an explicit execute call, is to mutate env.hosts in a host-list lookup task and then call do_work in the same session:

from fabric.api import env, run, task

from mylib import external_datastore

# Marked as a publicly visible task, but otherwise unchanged: still just
# "do the work, let somebody else worry about what hosts to run on".
@task
def do_work():
    run("something interesting on a host")

@task
def set_hosts(lookup_param):
    # Update env.hosts instead of calling execute()
    env.hosts = external_datastore.query(lookup_param)

Then invoke like so:

$ fab set_hosts:app do_work

One benefit of this approach over the previous one is that you can replace do_work with any other “workhorse” task:

$ fab set_hosts:db snapshot
$ fab set_hosts:cassandra,cluster2 repair_ring
$ fab set_hosts:redis,environ=prod status

Failure handling

Once the task list has been constructed, Fabric will start executing them as outlined in Execution strategy, until all tasks have been run on the entirety of their host lists. However, Fabric defaults to a “fail-fast” behavior pattern: if anything goes wrong, such as a remote program returning a nonzero return value or your fabfile’s Python code encountering an exception, execution will halt immediately.

This is typically the desired behavior, but there are many exceptions to the rule, so Fabric provides env.warn_only, a Boolean setting. It defaults to False, meaning an error condition will result in the program aborting immediately. However, if env.warn_only is set to True at the time of failure – with, say, the ~fabric.context_managers.settings context manager – Fabric will emit a warning message but continue executing.

Connections

fab itself doesn’t actually make any connections to remote hosts. Instead, it simply ensures that for each distinct run of a task on one of its hosts, the env var env.host_string is set to the right value. Users wanting to leverage Fabric as a library may do so manually to achieve similar effects (though as of Fabric 1.3, using ~fabric.tasks.execute is preferred and more powerful.)

env.host_string is (as the name implies) the “current” host string, and is what Fabric uses to determine what connections to make (or re-use) when network-aware functions are run. Operations like ~fabric.operations.run or ~fabric.operations.put use env.host_string as a lookup key in a shared dictionary which maps host strings to SSH connection objects.

Note

The connections dictionary (currently located at fabric.state.connections) acts as a cache, opting to return previously created connections if possible in order to save some overhead, and creating new ones otherwise.

Lazy connections

Because connections are driven by the individual operations, Fabric will not actually make connections until they’re necessary. Take for example this task which does some local housekeeping prior to interacting with the remote server:

from fabric.api import *

@hosts('host1')
def clean_and_upload():
    local('find assets/ -name "*.DS_Store" -exec rm {} \;')
    local('tar czf /tmp/assets.tgz assets/')
    put('/tmp/assets.tgz', '/tmp/assets.tgz')
    with cd('/var/www/myapp/'):
        run('tar xzf /tmp/assets.tgz')

What happens, connection-wise, is as follows:

  1. The two ~fabric.operations.local calls will run without making any network connections whatsoever;
  2. ~fabric.operations.put asks the connection cache for a connection to host1;
  3. The connection cache fails to find an existing connection for that host string, and so creates a new SSH connection, returning it to ~fabric.operations.put;
  4. ~fabric.operations.put uploads the file through that connection;
  5. Finally, the ~fabric.operations.run call asks the cache for a connection to that same host string, and is given the existing, cached connection for its own use.

Extrapolating from this, you can also see that tasks which don’t use any network-borne operations will never actually initiate any connections (though they will still be run once for each host in their host list, if any.)

Closing connections

Fabric’s connection cache never closes connections itself – it leaves this up to whatever is using it. The fab tool does this bookkeeping for you: it iterates over all open connections and closes them just before it exits (regardless of whether the tasks failed or not.)

Library users will need to ensure they explicitly close all open connections before their program exits. This can be accomplished by calling ~fabric.network.disconnect_all at the end of your script.
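
A minimal sketch of library-style use (the host string is illustrative):

from fabric.api import env, run
from fabric.network import disconnect_all

def main():
    env.host_string = 'deploy@web1.example.com'
    try:
        run('uptime')
    finally:
        disconnect_all()  # close any cached connections before exiting

if __name__ == '__main__':
    main()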

Note

~fabric.network.disconnect_all may be moved to a more public location in the future; we’re still working on making the library aspects of Fabric more solidified and organized.

Multiple connection attempts and skipping bad hosts

As of Fabric 1.4, multiple attempts may be made to connect to remote servers before aborting with an error: Fabric will try connecting env.connection_attempts times before giving up, with a timeout of env.timeout seconds each time. (These currently default to 1 try and 10 seconds, to match previous behavior, but they may be safely changed to whatever you need.)

Furthermore, even total failure to connect to a server is no longer an absolute hard stop: set env.skip_bad_hosts to True and in most situations (typically initial connections) Fabric will simply warn and continue, instead of aborting.

New in version 1.4.
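
Tuning these from a fabfile is a one-liner each; a sketch:

from fabric.api import env

env.connection_attempts = 3  # try each connection up to 3 times...
env.timeout = 5              # ...waiting 5 seconds per attempt
env.skip_bad_hosts = True    # warn and move on if a host is unreachable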

Password management

Fabric maintains an in-memory, two-tier password cache to help remember your login and sudo passwords in certain situations; this helps avoid tedious re-entry when multiple systems share the same password [1], or if a remote system’s sudo configuration doesn’t do its own caching.

The first layer is a simple default or fallback password cache, env.password (which may also be set at the command line via --password or --initial-password-prompt). This env var stores a single password which (if non-empty) will be tried in the event that the host-specific cache (see below) has no entry for the current host string.

env.passwords (plural!) serves as a per-user/per-host cache, storing the most recently entered password for every unique user/host/port combination. Due to this cache, connections to multiple different users and/or hosts in the same session will only require a single password entry for each. (Previous versions of Fabric used only the single, default password cache and thus required password re-entry every time the previously entered password became invalid.)

Depending on your configuration and the number of hosts your session will connect to, you may find setting either or both of these env vars to be useful. However, Fabric will automatically fill them in as necessary without any additional configuration.

Specifically, each time a password prompt is presented to the user, the value entered is used to update both the single default password cache, and the cache value for the current value of env.host_string.
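
Pre-filling either cache from a fabfile is straightforward; a sketch (all values are illustrative):

from fabric.api import env

# Single fallback password, tried when no host-specific entry exists:
env.password = 'default-pass'

# Per-host cache; keys are full user/host/port strings:
env.passwords = {'deploy@web1:22': 'web1-pass'}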

[1] We highly recommend the use of SSH key-based access instead of relying on homogeneous password setups, as it's significantly more secure.

Leveraging native SSH config files

Command-line SSH clients (such as the one provided by OpenSSH) make use of a specific configuration format typically known as ssh_config, and will read from a file in the platform-specific location $HOME/.ssh/config (or an arbitrary path given to --ssh-config-path/env.ssh_config_path.) This file allows specification of various SSH options such as default or per-host usernames, hostname aliases, and toggling other settings (such as whether to use agent forwarding.)

Fabric’s SSH implementation allows loading a subset of these options from one’s actual SSH config file, should it exist. This behavior is not enabled by default (in order to be backwards compatible) but may be turned on by setting env.use_ssh_config to True at the top of your fabfile.

If enabled, the following SSH config directives will be loaded and honored by Fabric:

  • User and Port will be used to fill in the appropriate connection parameters when not otherwise specified, in the following fashion:

    • Globally specified User/Port will be used in place of the current defaults (local username and 22, respectively) if the appropriate env vars are not set.
    • However, if env.user/env.port are set, they override global User/Port values.
    • User/port values in the host string itself (e.g. hostname:222) will override everything, including any ssh_config values.
  • HostName can be used to replace the given hostname, just like with regular ssh. So a Host foo entry specifying HostName example.com will allow you to give Fabric the hostname 'foo' and have that expanded into 'example.com' at connection time.

  • IdentityFile will extend (not replace) env.key_filename.

  • ForwardAgent will augment env.forward_agent in an “OR” manner: if either is set to a positive value, agent forwarding will be enabled.

  • ProxyCommand will trigger use of a proxy command for host connections, just as with regular ssh.

    Note

    If all you want to do is bounce SSH traffic off a gateway, you may find env.gateway to be a more efficient connection method (which will also honor more Fabric-level settings) than the typical ssh gatewayhost nc %h %p method of using ProxyCommand as a gateway.

    Note

    If your SSH config file contains ProxyCommand directives and you have set env.gateway to a non-None value, env.gateway will take precedence and the ProxyCommand will be ignored.

    If you have a pre-created SSH config file, the rationale is that it will be easier for you to modify env.gateway (e.g. via ~fabric.context_managers.settings) than to work around your config file's contents entirely.

fab options and arguments

The most common method for utilizing Fabric is via its command-line tool, fab, which should have been placed on your shell’s executable path when Fabric was installed. fab tries hard to be a good Unix citizen, using a standard style of command-line switches, help output, and so forth.

Basic use

In its most simple form, fab may be called with no options at all, and with one or more arguments, which should be task names, e.g.:

$ fab task1 task2

As detailed in the Overview and Tutorial and in Execution model, this will run task1 followed by task2, assuming that Fabric was able to find a fabfile nearby containing Python functions with those names.

However, it’s possible to expand this simple usage into something more flexible, by using the provided options and/or passing arguments to individual tasks.

Arbitrary remote shell commands

New in version 0.9.2.

Fabric leverages a lesser-known command line convention and may be called in the following manner:

$ fab [options] -- [shell command]

where everything after the -- is turned into a temporary ~fabric.operations.run call, and is not parsed for fab options. If you’ve defined a host list at the module level or on the command line, this usage will act like a one-line anonymous task.

For example, let’s say you just wanted to get the kernel info for a bunch of systems; you could do this:

$ fab -H system1,system2,system3 -- uname -a

which would be literally equivalent to the following fabfile:

from fabric.api import run

def anonymous():
    run("uname -a")

as if it were executed thusly:

$ fab -H system1,system2,system3 anonymous

Most of the time you will want to just write out the task in your fabfile (anything you use once, you’re likely to use again) but this feature provides a handy, fast way to quickly dash off an SSH-borne command while leveraging your fabfile’s connection settings.

Command-line options

A quick overview of all possible command line options can be found via fab --help. If you’re looking for details on a specific option, we go into detail below.

Note

fab uses Python’s optparse library, meaning that it honors typical Linux or GNU style short and long options, as well as freely mixing options and arguments. E.g. fab task1 -H hostname task2 -i path/to/keyfile is just as valid as the more straightforward fab -H hostname -i path/to/keyfile task1 task2.

-a, --no_agent

Sets env.no_agent to True, forcing our SSH layer not to talk to the SSH agent when trying to unlock private key files.

New in version 0.9.1.

-A, --forward-agent

Sets env.forward_agent to True, enabling agent forwarding.

New in version 1.4.

--abort-on-prompts

Sets env.abort_on_prompts to True, forcing Fabric to abort whenever it would prompt for input.

New in version 1.1.

-c RCFILE, --config=RCFILE

Sets env.rcfile to the given file path, which Fabric will try to load on startup and use to update environment variables.

-d COMMAND, --display=COMMAND

Prints the entire docstring for the given task, if there is one. Does not currently print out the task’s function signature, so descriptive docstrings are a good idea. (They’re always a good idea, of course – just moreso here.)

--connection-attempts=M, -n M

Set number of times to attempt connections. Sets env.connection_attempts.

New in version 1.4.

-D, --disable-known-hosts

Sets env.disable_known_hosts to True, preventing Fabric from loading the user’s SSH known_hosts file.

-f FABFILE, --fabfile=FABFILE

The fabfile name pattern to search for (defaults to fabfile.py), or alternately an explicit file path to load as the fabfile (e.g. /path/to/my/fabfile.py.)

-F LIST_FORMAT, --list-format=LIST_FORMAT

Allows control over the output format of --list. short is equivalent to --shortlist, normal is the same as simply omitting this option entirely (i.e. the default), and nested prints out a nested namespace tree.

New in version 1.1.

-g HOST, --gateway=HOST

Sets env.gateway to HOST host string.

New in version 1.5.

-h, --help

Displays a standard help message, with all possible options and a brief overview of what they do, then exits.

--hide=LEVELS

A comma-separated list of output levels to hide by default.

-H HOSTS, --hosts=HOSTS

Sets env.hosts to the given comma-delimited list of host strings.

-x HOSTS, --exclude-hosts=HOSTS

Sets env.exclude_hosts to the given comma-delimited list of host strings to then keep out of the final host list.

New in version 1.1.

-i KEY_FILENAME

When set to a file path, will load the given file as an SSH identity file (usually a private key.) This option may be repeated multiple times. Sets (or appends to) env.key_filename.

-I, --initial-password-prompt

Forces a password prompt at the start of the session (after fabfile load and option parsing, but before executing any tasks) in order to pre-fill env.password.

This is useful for fire-and-forget runs (especially parallel sessions, in which runtime input is not possible) when setting the password via --password or via env.password in your fabfile is undesirable.

Note

The value entered into this prompt will overwrite anything supplied via env.password at module level, or via --password.

-k

Sets env.no_keys to True, forcing the SSH layer to not look for SSH private key files in one’s home directory.

New in version 0.9.1.

--keepalive=KEEPALIVE

Sets env.keepalive to the given (integer) value, specifying an SSH keepalive interval.

New in version 1.1.

--linewise

Forces output to be buffered line-by-line instead of byte-by-byte. Often useful or required for parallel execution.

New in version 1.3.

-l, --list

Imports a fabfile as normal, but then prints a list of all discovered tasks and exits. Will also print the first line of each task’s docstring, if it has one, next to it (truncating if necessary.)

Changed in version 0.9.1: Added docstrings to the output.

-p PASSWORD, --password=PASSWORD

Sets env.password to the given string; it will then be used as the default password when making SSH connections or calling the sudo program.

-P, --parallel

Sets env.parallel to True, causing tasks to run in parallel.

New in version 1.3.

--no-pty

Sets env.always_use_pty to False, causing all ~fabric.operations.run/~fabric.operations.sudo calls to behave as if one had specified pty=False.

New in version 1.0.

-r, --reject-unknown-hosts

Sets env.reject_unknown_hosts to True, causing Fabric to abort when connecting to hosts not found in the user’s SSH known_hosts file.

-R ROLES, --roles=ROLES

Sets env.roles to the given comma-separated list of role names.

--set KEY=VALUE,...

Allows you to set default values for arbitrary Fabric env vars. Values set this way have a low precedence – they will not override more specific env vars which are also specified on the command line. E.g.:

fab --set password=foo --password=bar

will result in env.password = 'bar', not 'foo'.

Multiple KEY=VALUE pairs may be comma-separated, e.g. fab --set var1=val1,var2=val2.

Other than basic string values, you may also set env vars to True by omitting the =VALUE (e.g. fab --set KEY), and you may set values to the empty string (and thus a False-equivalent value) by keeping the equals sign, but omitting VALUE (e.g. fab --set KEY=.)

New in version 1.4.
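
As a quick illustration, a fabfile can read values supplied this way straight out of env; a minimal sketch (domain is a made-up key used only for this example):

# fabfile.py -- sketch; 'domain' is a hypothetical env key.
from fabric.api import env, task

@task
def show_domain():
    # Populated via e.g. `fab --set domain=staging.example.com show_domain`;
    # env behaves like a dict, so missing keys can be given a fallback.
    print(env.get('domain', 'no domain set'))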

-s SHELL, --shell=SHELL

Sets env.shell to the given string, overriding the default shell wrapper used to execute remote commands.

--shortlist

Similar to --list, but without any embellishment, just task names separated by newlines with no indentation or docstrings.

New in version 0.9.2.

See also

--list

--show=LEVELS

A comma-separated list of output levels to be added to those that are shown by default.

See also

~fabric.operations.run, ~fabric.operations.sudo

--ssh-config-path

Sets env.ssh_config_path.

New in version 1.4.

--skip-bad-hosts

Sets env.skip_bad_hosts, causing Fabric to skip unavailable hosts.

New in version 1.4.

--timeout=N, -t N

Set connection timeout in seconds. Sets env.timeout.

New in version 1.4.

--command-timeout=N, -T N

Set remote command timeout in seconds. Sets env.command_timeout.

New in version 1.6.

-u USER, --user=USER

Sets env.user to the given string; it will then be used as the default username when making SSH connections.

-V, --version

Displays Fabric’s version number, then exits.

-w, --warn-only

Sets env.warn_only to True, causing Fabric to continue execution even when commands encounter error conditions.

-z, --pool-size

Sets env.pool_size, which specifies how many processes to run concurrently during parallel execution.

New in version 1.3.

Per-task arguments

The options given in Command-line options apply to the invocation of fab as a whole; even if the order is mixed around, options still apply to all given tasks equally. Additionally, since tasks are just Python functions, it’s often desirable to pass in arguments to them at runtime.

Answering both these needs is the concept of “per-task arguments”, which is a special syntax you can tack onto the end of any task name:

  • Use a colon (:) to separate the task name from its arguments;
  • Use commas (,) to separate arguments from one another (may be escaped by using a backslash, i.e. \,);
  • Use equals signs (=) for keyword arguments, or omit them for positional arguments. May also be escaped with backslashes.

Additionally, since this process involves string parsing, all values will end up as Python strings, so plan accordingly. (We hope to improve upon this in future versions of Fabric, provided an intuitive syntax can be found.)

For example, a “create a new user” task might be defined like so (omitting most of the actual logic for brevity):

def new_user(username, admin='no', comment="No comment provided"):
    print("New User (%s): %s" % (username, comment))

You can specify just the username:

$ fab new_user:myusername

Or treat it as an explicit keyword argument:

$ fab new_user:username=myusername

If both args are given, you can again give them as positional args:

$ fab new_user:myusername,yes

Or mix and match, just like in Python:

$ fab new_user:myusername,admin=yes

The print call above is useful for illustrating escaped commas, like so:

$ fab new_user:myusername,admin=no,comment='Gary\, new developer (starts Monday)'

Note

Quoting the backslash-escaped comma is required, as not doing so will cause shell syntax errors. Quotes are also needed whenever an argument involves other shell-related characters such as spaces.

All of the above are translated into the expected Python function calls. For example, the last call above would become:

>>> new_user('myusername', admin='yes', comment='Gary, new developer (starts Monday)')
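
Since every parsed value arrives as a Python string, tasks that accept flag-like arguments usually normalize them explicitly; a minimal sketch of that pattern, extending the example above:

def new_user(username, admin='no', comment="No comment provided"):
    # CLI-supplied values are always strings, so turn 'yes'/'no' (or
    # 'true'/'1') into a real boolean before acting on it.
    is_admin = str(admin).lower() in ('yes', 'y', 'true', '1')
    print("New User (%s): admin=%r" % (username, is_admin))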

Roles and hosts

As mentioned in the section on task execution, there are a handful of per-task keyword arguments (host, hosts, role and roles) which do not actually map to the task functions themselves, but are used for setting per-task host and/or role lists.

These special kwargs are removed from the args/kwargs sent to the task function itself; this is so that you don’t run into TypeErrors if your task doesn’t define the kwargs in question. (It also means that if you do define arguments with these names, you won’t be able to specify them in this manner – a regrettable but necessary sacrifice.)

Note

If both the plural and singular forms of these kwargs are given, the value of the plural will win out and the singular will be discarded.

When using the plural form of these arguments, one must use semicolons (;) since commas are already being used to separate arguments from one another. Furthermore, since your shell is likely to consider semicolons a special character, you’ll want to quote the host list string to prevent shell interpretation, e.g.:

$ fab new_user:myusername,hosts="host1;host2"

Again, since the hosts kwarg is removed from the argument list sent to the new_user task function, the actual Python invocation would be new_user('myusername'), and the function would be executed on a host list of ['host1', 'host2'].

Settings files

Fabric currently honors a simple user settings file, or fabricrc (think bashrc, but for fab), which should contain one or more key-value pairs, one per line. These lines are subject to string.split('='), and thus can currently only be used to specify string settings. Each such key-value pair is used to update env when fab runs; the file is loaded prior to any fabfile.

By default, Fabric looks for ~/.fabricrc, and this may be overridden by specifying the -c flag to fab.

For example, if your typical SSH login username differs from your workstation username, and you don’t want to modify env.user in a project’s fabfile (possibly because you expect others to use it as well) you could write a fabricrc file like so:

user = ssh_user_name

Then, when running fab, your fabfile would load up with env.user set to 'ssh_user_name'. Other users of that fabfile could do the same, allowing the fabfile itself to be cleanly agnostic regarding the default username.
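
To make the parsing rule concrete, here is a rough sketch of the kind of loading fab performs (simplified for illustration; the real logic lives in fabric/main.py):

import os

def load_fabricrc(path='~/.fabricrc'):
    """Parse key=value lines into a dict, roughly mirroring how fab
    seeds env from a settings file. Malformed lines are skipped."""
    settings = {}
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        return settings
    with open(path) as fh:
        for line in fh:
            if '=' not in line:
                continue
            key, value = line.strip().split('=', 1)
            settings[key.strip()] = value.strip()
    return settings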

Fabfile construction and use

This document contains miscellaneous sections about fabfiles, both how to best write them, and how to use them once written.

Fabfile discovery

Fabric is capable of loading Python modules (e.g. fabfile.py) or packages (e.g. a fabfile/ directory containing an __init__.py). By default, it looks for something named (to Python’s import machinery) fabfile - so either fabfile/ or fabfile.py.

The fabfile discovery algorithm searches in the invoking user’s current working directory or any parent directories. Thus, it is oriented around “project” use, where one keeps e.g. a fabfile.py at the root of a source code tree. Such a fabfile will then be discovered no matter where in the tree the user invokes fab.

The specific name to be searched for may be overridden on the command-line with the -f option, or by adding a fabricrc line which sets the value of fabfile. For example, if you wanted to name your fabfile fab_tasks.py, you could create such a file and then call fab -f fab_tasks.py <task name>, or add fabfile = fab_tasks.py to ~/.fabricrc.

If the given fabfile name contains path elements other than a filename (e.g. ../fabfile.py or /dir1/dir2/custom_fabfile) it will be treated as a file path and directly checked for existence without any sort of searching. When in this mode, tilde-expansion will be applied, so one may refer to e.g. ~/personal_fabfile.py.

Note

Fabric does a normal import (actually an __import__) of your fabfile in order to access its contents – it does not do any eval-ing or similar. In order for this to work, Fabric temporarily adds the found fabfile’s containing folder to the Python load path (and removes it immediately afterwards.)

Changed in version 0.9.2: Added the ability to load package fabfiles.

Importing Fabric

Because Fabric is just Python, you can import its components any way you want. However, for the purposes of encapsulation and convenience (and to make life easier for Fabric’s packaging script) Fabric’s public API is maintained in the fabric.api module.

All of Fabric’s Operations, Context Managers, Decorators and Utils are included in this module as a single, flat namespace. This enables a very simple and consistent interface to Fabric within your fabfiles:

from fabric.api import *

# call run(), sudo(), etc etc

This is not technically best practice (for a number of reasons), and if you’re only using a couple of Fab API calls, it is probably a good idea to explicitly from fabric.api import env, run or similar. However, in most nontrivial fabfiles, you’ll be using all or most of the API, and the star import:

from fabric.api import *

will be a lot easier to write and read than:

from fabric.api import abort, cd, env, get, hide, hosts, local, prompt, \
    put, require, roles, run, runs_once, settings, show, sudo, warn

so in this case we feel pragmatism overrides best practices.

Defining tasks and importing callables

For important information on what exactly Fabric will consider as a task when it loads your fabfile, as well as notes on how best to import other code, please see Defining tasks in the Execution model documentation.

Interaction with remote programs

Fabric’s primary operations, ~fabric.operations.run and ~fabric.operations.sudo, are capable of sending local input to the remote end, in a manner nearly identical to the ssh program. For example, programs which display password prompts (e.g. a database dump utility, or changing a user’s password) will behave just as if you were interacting with them directly.
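
For example, a task along these lines (a sketch; the user argument is arbitrary) simply passes the remote passwd program’s prompts through to you, and your responses back to it:

from fabric.api import sudo, task

@task
def change_password(user):
    # passwd prompts interactively on the remote end; Fabric forwards
    # your keystrokes much as the plain ssh program would.
    sudo("passwd %s" % user)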

However, as with ssh itself, Fabric’s implementation of this feature is subject to a handful of limitations which are not always intuitive. This document discusses such issues in detail.

Note

Readers unfamiliar with the basics of Unix stdout and stderr pipes, and/or terminal devices, may wish to visit the Wikipedia pages for Unix pipelines and Pseudo terminals respectively.

Combining stdout and stderr

The first issue to be aware of is that of the stdout and stderr streams, and why they are separated or combined as needed.

Buffering

Fabric 0.9.x and earlier, and Python itself, buffer output on a line-by-line basis: text is not printed to the user until a newline character is found. This works fine in most situations but becomes problematic when one needs to deal with partial-line output such as prompts.

Note

Line-buffered output can make programs appear to halt or freeze for no reason, as prompts print out text without a newline, waiting for the user to enter their input and press Return.

Newer Fabric versions buffer both input and output on a character-by-character basis in order to make interaction with prompts possible. This has the convenient side effect of enabling interaction with complex programs utilizing the “curses” libraries or which otherwise redraw the screen (think top).

Crossing the streams

Unfortunately, printing to stderr and stdout simultaneously (as many programs do) means that when the two streams are printed independently one byte at a time, they can become garbled or meshed together. While this can sometimes be mitigated by line-buffering one of the streams and not the other, it’s still a serious issue.

To solve this problem, Fabric uses a setting in our SSH layer which merges the two streams at a low level and causes output to appear more naturally. This setting is represented in Fabric as the combine_stderr env var and keyword argument, and is True by default.

Due to this default setting, output will appear correctly, but at the cost of an empty .stderr attribute on the return values of ~fabric.operations.run/~fabric.operations.sudo, as all output will appear to be stdout.

Conversely, users requiring a distinct stderr stream at the Python level and who aren’t bothered by garbled user-facing output (or who are hiding stdout and stderr from the command in question) may opt to set this to False as needed.
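
For example, a small helper along these lines (a sketch, not part of Fabric’s API) captures a command’s stderr separately:

from fabric.api import hide, run, settings

def stderr_of(cmd):
    # pty=False matters here too: a pseudo-terminal merges the streams
    # regardless of combine_stderr (see the pty discussion below).
    with settings(hide('everything'), warn_only=True):
        result = run(cmd, combine_stderr=False, pty=False)
    return result.stderr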

Pseudo-terminals

The other main issue to consider when presenting interactive prompts to users is that of echoing the user’s own input.

Echoes

Typical terminal applications or bona fide text terminals (e.g. when using a Unix system without a running GUI) present programs with a terminal device called a tty or pty (for pseudo-terminal). These automatically echo all text typed into them back out to the user (via stdout), as interaction without seeing what you had just typed would be difficult. Terminal devices are also able to conditionally turn off echoing, allowing secure password prompts.

However, it’s possible for programs to be run without a tty or pty present at all (consider cron jobs, for example) and in this situation, any stdin data being fed to the program won’t be echoed. This is desirable for programs being run without any humans around, and it’s also Fabric’s old default mode of operation.

Fabric’s approach

Unfortunately, in the context of executing commands via Fabric, when no pty is present to echo a user’s stdin, Fabric must echo it for them. This is sufficient for many applications, but it presents problems for password prompts, which become insecure.

In the interests of security and meeting the principle of least surprise (insofar as users are typically expecting things to behave as they would when run in a terminal emulator), Fabric 1.0 and greater force a pty by default. With a pty enabled, Fabric simply allows the remote end to handle echoing or hiding of stdin and does not echo anything itself.

Note

In addition to allowing normal echo behavior, a pty also means programs that behave differently when attached to a terminal device will then do so. For example, programs that colorize output on terminals but not when run in the background will print colored output. Be wary of this if you inspect the return value of ~fabric.operations.run or ~fabric.operations.sudo!

For situations requiring the pty behavior turned off, the --no-pty command-line argument and always_use_pty env var may be used.

Combining the two

As a final note, keep in mind that use of pseudo-terminals effectively implies combining stdout and stderr – in much the same way as the combine_stderr setting does. This is because a terminal device naturally sends both stdout and stderr to the same place – the user’s display – thus making it impossible to differentiate between them.

However, at the Fabric level, the two groups of settings are distinct from one another and may be combined in various ways. The default is for both to be set to True; the other combinations are as follows:

  • run("cmd", pty=False, combine_stderr=True): will cause Fabric to echo all stdin itself, including passwords, as well as potentially altering cmd‘s behavior. Useful if cmd behaves undesirably when run under a pty and you’re not concerned about password prompts.
  • run("cmd", pty=False, combine_stderr=False): with both settings False, Fabric will echo stdin and won’t issue a pty – and this is highly likely to result in undesired behavior for all but the simplest commands. However, it is also the only way to access a distinct stderr stream, which is occasionally useful.
  • run("cmd", pty=True, combine_stderr=False): valid, but won’t really make much of a difference, as pty=True will still result in merged streams. May be useful for avoiding any edge case problems in combine_stderr (none are presently known).

Library Use

Fabric’s primary use case is via fabfiles and the fab tool, and this is reflected in much of the documentation. However, Fabric’s internals are written in such a manner as to be easily used without fab or fabfiles at all – this document will show you how.

There’s really only a couple of considerations one must keep in mind, when compared to writing a fabfile and using fab to run it: how connections are really made, and how disconnections occur.

Connections

We’ve documented how Fabric really connects to its hosts before, but it’s currently somewhat buried in the middle of the overall execution docs. Specifically, you’ll want to skip over to the Connections section and read it real quick. (You should really give that entire document a once-over, but it’s not absolutely required.)

As that section mentions, the key is simply that ~fabric.operations.run, ~fabric.operations.sudo and the other operations only look in one place when connecting: env.host_string. All of the other mechanisms for setting hosts are interpreted by the fab tool when it runs, and don’t matter when running as a library.

That said, most use cases where you want to marry a given task X and a given list of hosts Y can, as of Fabric 1.3, be handled with the ~fabric.tasks.execute function via execute(X, hosts=Y). Please see ~fabric.tasks.execute's documentation for details – manual host string manipulation should rarely be necessary.
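
A minimal library-style sketch tying these together (the host strings are hypothetical):

from fabric.api import env, run
from fabric.tasks import execute

def uptime():
    run("uptime")

if __name__ == "__main__":
    # Manual approach: operations connect to whatever env.host_string says.
    env.host_string = "deploy@web1.example.com"
    uptime()

    # Fabric >= 1.3 shortcut: execute() handles the host loop for you.
    execute(uptime, hosts=["web1.example.com", "web2.example.com"])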

Disconnecting

The other main thing that fab does for you is to disconnect from all hosts at the end of a session; otherwise, Python will sit around forever waiting for those network resources to be released.

Fabric 0.9.4 and newer have a function you can use to do this easily: ~fabric.network.disconnect_all. Simply make sure your code calls this when it terminates (typically in the finally clause of an outer try: finally statement – lest errors in your code prevent disconnections from happening!) and things ought to work pretty well.
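
A sketch of that pattern (host name hypothetical):

from fabric.api import env, run
from fabric.network import disconnect_all

def main():
    env.host_string = "web1.example.com"
    run("uptime")

if __name__ == "__main__":
    try:
        main()
    finally:
        # Release cached SSH connections even if main() raised; otherwise
        # the interpreter may hang waiting on open network resources.
        disconnect_all()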

If you’re on Fabric 0.9.3 or older, you can simply do this (disconnect_all just adds a bit of nice output to this logic):

from fabric.state import connections

for key in connections.keys():
    connections[key].close()
    del connections[key]

Final note

This document is an early draft, and may not cover absolutely every difference between fab use and library use. However, the above should highlight the largest stumbling blocks. When in doubt, note that in the Fabric source code, fabric/main.py contains the bulk of the extra work done by fab, and may serve as a useful reference.

Managing output

The fab tool is very verbose by default and prints out almost everything it can, including the remote end’s stderr and stdout streams, the command strings being executed, and so forth. While this is necessary in many cases in order to know just what’s going on, any nontrivial Fabric task will quickly become difficult to follow as it runs.

Output levels

To aid in organizing task output, Fabric output is grouped into a number of non-overlapping levels or groups, each of which may be turned on or off independently. This provides flexible control over what is displayed to the user.

Note

All levels, save for debug, are on by default.

Standard output levels

The standard, atomic output levels/groups are as follows:

  • status: Status messages, i.e. noting when Fabric is done running, if the user used a keyboard interrupt, or when servers are disconnected from. These messages are almost always relevant and rarely verbose.
  • aborts: Abort messages. Like status messages, these should really only be turned off when using Fabric as a library, and possibly not even then. Note that even if this output group is turned off, aborts will still occur – there just won’t be any output about why Fabric aborted!
  • warnings: Warning messages. These are often turned off when one expects a given operation to fail, such as when using grep to test existence of text in a file. If paired with setting env.warn_only to True, this can result in fully silent warnings when remote programs fail. As with aborts, this setting does not control actual warning behavior, only whether warning messages are printed or hidden.
  • running: Printouts of commands being executed or files transferred, e.g. [myserver] run: ls /var/www. Also controls printing of tasks being run, e.g. [myserver] Executing task 'foo'.
  • stdout: Local, or remote, stdout, i.e. non-error output from commands.
  • stderr: Local, or remote, stderr, i.e. error-related output from commands.
  • user: User-generated output, i.e. local output printed by fabfile code via use of the ~fabric.utils.fastprint or ~fabric.utils.puts functions.

Changed in version 0.9.2: Added “Executing task” lines to the running output level.

Changed in version 0.9.2: Added the user output level.
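
For example, the user level is what covers output produced like this:

from fabric.api import fastprint, puts

def report():
    puts("starting...")        # prefixed, newline-terminated 'user' output
    fastprint("working... ")   # same level, but no prefix or trailing newline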

Debug output

There is a final atomic output level, debug, which behaves slightly differently from the rest:

  • debug: Turn on debugging (which is off by default.) Currently, this is largely used to view the “full” commands being run; take for example this ~fabric.operations.run call:

    run('ls "/home/username/Folder Name With Spaces/"')
    

    Normally, the running line will show exactly what is passed into ~fabric.operations.run, like so:

    [hostname] run: ls "/home/username/Folder Name With Spaces/"
    

    With debug on, and assuming you’ve left shell set to True, you will see the literal, full string as passed to the remote server:

    [hostname] run: /bin/bash -l -c "ls \"/home/username/Folder Name With Spaces\""
    

    Enabling debug output will also display full Python tracebacks during aborts.

    Note

    Where modifying other pieces of output (such as in the above example where it modifies the ‘running’ line to show the shell and any escape characters), this setting takes precedence over the others; so if running is False but debug is True, you will still be shown the ‘running’ line in its debugging form.

Changed in version 1.0: Debug output now includes full Python tracebacks during aborts.

Output level aliases

In addition to the atomic/standalone levels above, Fabric also provides a couple of convenience aliases which map to multiple other levels. These may be referenced anywhere the other levels are referenced, and will effectively toggle all of the levels they are mapped to.

  • output: Maps to both stdout and stderr. Useful for when you only care to see the ‘running’ lines and your own print statements (and warnings).
  • everything: Includes warnings, running, user and output (see above.) Thus, when turning off everything, you will only see a bare minimum of output (just status and debug if it’s on), along with your own print statements.
  • commands: Includes stdout and running. Good for hiding non-erroring commands entirely, while still displaying any stderr output.

Changed in version 1.4: Added the commands output alias.

Hiding and/or showing output levels

You may toggle any of Fabric’s output levels in a number of ways; for examples, please see the API docs linked in each bullet point:

  • Direct modification of fabric.state.output: fabric.state.output is a dictionary subclass (similar to env) whose keys are the output level names, and whose values are either True (show that particular type of output) or False (hide it.)

    fabric.state.output is the lowest-level implementation of output levels and is what Fabric’s internals reference when deciding whether or not to print their output.

  • Context managers: ~fabric.context_managers.hide and ~fabric.context_managers.show are twin context managers that take one or more output level names as strings, and either hide or show them within the wrapped block. As with Fabric’s other context managers, the prior values are restored when the block exits.

    See also

    ~fabric.context_managers.settings, which can nest calls to ~fabric.context_managers.hide and/or ~fabric.context_managers.show inside itself.

  • Command-line arguments: You may use the --hide and/or --show arguments to fab, which behave exactly like the context managers of the same names (but are, naturally, applied globally) and take comma-separated strings as input.
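
A short sketch combining the first two approaches:

import fabric.state
from fabric.api import hide, run

# Lowest level: flip a key in the output dict for the rest of the run.
fabric.state.output['status'] = False

def quiet_uname():
    # Context manager: hide 'running' and 'stdout' lines inside this block
    # only; the prior values are restored when the block exits.
    with hide('running', 'stdout'):
        return run('uname -s')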

Parallel execution

New in version 1.3.

By default, Fabric executes all specified tasks serially (see Execution strategy for details.) This document describes Fabric’s options for running tasks on multiple hosts in parallel, via per-task decorators and/or global command-line switches.

What it does

Because Fabric 1.x is not fully threadsafe (and because in general use, task functions do not typically interact with one another) this functionality is implemented via the Python multiprocessing module. It creates one new process for each host and task combination, optionally using a (configurable) sliding window to prevent too many processes from running at the same time.

For example, imagine a scenario where you want to update Web application code on a number of Web servers, and then reload the servers once the code has been distributed everywhere (to allow for easier rollback if code updates fail.) One could implement this with the following fabfile:

from fabric.api import *

def update():
    with cd("/srv/django/myapp"):
        run("git pull")

def reload():
    sudo("service apache2 reload")

and execute it on a set of 3 servers, in serial, like so:

$ fab -H web1,web2,web3 update reload

Normally, without any parallel execution options activated, Fabric would run in order:

  1. update on web1
  2. update on web2
  3. update on web3
  4. reload on web1
  5. reload on web2
  6. reload on web3

With parallel execution activated (via -P – see below for details), this turns into:

  1. update on web1, web2, and web3
  2. reload on web1, web2, and web3

Hopefully the benefits of this are obvious – if update took 5 seconds to run and reload took 2 seconds, serial execution takes (5+2)*3 = 21 seconds to run, while parallel execution takes only a third of the time, (5+2) = 7 seconds on average.

How to use it

Decorators

Since the minimum “unit” that parallel execution affects is a task, the functionality may be enabled or disabled on a task-by-task basis using the ~fabric.decorators.parallel and ~fabric.decorators.serial decorators. For example, this fabfile:

from fabric.api import *

@parallel
def runs_in_parallel():
    pass

def runs_serially():
    pass

when run in this manner:

$ fab -H host1,host2,host3 runs_in_parallel runs_serially

will result in the following execution sequence:

  1. runs_in_parallel on host1, host2, and host3
  2. runs_serially on host1
  3. runs_serially on host2
  4. runs_serially on host3

Command-line flags

One may also force all tasks to run in parallel by using the command-line flag -P or the env variable env.parallel. However, any task specifically wrapped with ~fabric.decorators.serial will ignore this setting and continue to run serially.

For example, the following fabfile will result in the same execution sequence as the one above:

from fabric.api import *

def runs_in_parallel():
    pass

@serial
def runs_serially():
    pass

when invoked like so:

$ fab -H host1,host2,host3 -P runs_in_parallel runs_serially

As before, runs_in_parallel will run in parallel, and runs_serially in sequence.

Bubble size

With large host lists, a user’s local machine can get overwhelmed by running too many concurrent Fabric processes. Because of this, you may opt to use a moving bubble approach that limits Fabric to a specific number of concurrently active processes.

By default, no bubble is used and all hosts are run in one concurrent pool. You can override this on a per-task level by specifying the pool_size keyword argument to ~fabric.decorators.parallel, or globally via -z.

For example, to run on 5 hosts at a time:

from fabric.api import *

@parallel(pool_size=5)
def heavy_task():
    # lots of heavy local lifting or lots of IO here
    pass

Or skip the pool_size kwarg and set the size globally on the command line instead:

$ fab -P -z 5 heavy_task

Linewise vs bytewise output

Fabric’s default mode of printing to the terminal is byte-by-byte, in order to support Interaction with remote programs. This often gives poor results when running in parallel mode, as the multiple processes may write to your terminal’s standard out stream simultaneously.

To help offset this problem, Fabric’s linewise output option is automatically enabled whenever parallelism is active. This will cause you to lose most of the benefits of Fabric’s remote interactivity features outlined above, but as those do not map well to parallel invocations, it’s typically a fair trade.

There’s no way to avoid the multiple processes mixing up on a line-by-line basis, but you will at least be able to tell them apart by the host-string line prefix.

Note

Future versions will add improved logging support to make troubleshooting parallel runs easier.

SSH behavior

Fabric currently makes use of a pure-Python SSH re-implementation for managing connections, meaning that there are occasionally spots where it is limited by that library’s capabilities. Below are areas of note where Fabric will exhibit behavior that isn’t consistent with, or as flexible as, the behavior of the ssh command-line program.

Unknown hosts

SSH’s host key tracking mechanism keeps tabs on all the hosts you attempt to connect to, and maintains a ~/.ssh/known_hosts file with mappings between identifiers (IP address, sometimes with a hostname as well) and SSH keys. (For details on how this works, please see the OpenSSH documentation.)

The paramiko library is capable of loading up your known_hosts file, and will then compare any host it connects to against that mapping. Settings are available to determine what happens when an unknown host (a host whose hostname or IP is not found in known_hosts) is seen:

  • Reject: the host key is rejected and the connection is not made. This results in a Python exception, which will terminate your Fabric session with a message that the host is unknown.
  • Add: the new host key is added to the in-memory list of known hosts, the connection is made, and things continue normally. Note that this does not modify your on-disk known_hosts file!
  • Ask: not yet implemented at the Fabric level, this is a paramiko library option which would result in the user being prompted about the unknown key and whether to accept it.

Whether to reject or add hosts, as above, is controlled in Fabric via the env.reject_unknown_hosts option, which is False by default for convenience’s sake. We feel this is a valid tradeoff between convenience and security; anyone who feels otherwise can easily modify their fabfiles at module level to set env.reject_unknown_hosts = True.

Known hosts with changed keys

The point of SSH’s key/fingerprint tracking is so that man-in-the-middle attacks can be detected: if an attacker redirects your SSH traffic to a computer under his control, and pretends to be your original destination server, the host keys will not match. Thus, the default behavior of SSH (and its Python implementation) is to immediately abort the connection when a host previously recorded in known_hosts suddenly starts sending us a different host key.

In some edge cases such as some EC2 deployments, you may want to ignore this potential problem. Our SSH layer, at the time of writing, doesn’t give us control over this exact behavior, but we can sidestep it by simply skipping the loading of known_hosts – if the host list being compared to is empty, then there’s no problem. Set env.disable_known_hosts to True when you want this behavior; it is False by default, in order to preserve default SSH behavior.

Warning

Enabling env.disable_known_hosts will leave you wide open to man-in-the-middle attacks! Please use with caution.

Defining tasks

As of Fabric 1.1, there are two distinct methods you may use in order to define which objects in your fabfile show up as tasks:

  • The “new” method starting in 1.1 considers instances of ~fabric.tasks.Task or its subclasses, and also descends into imported modules to allow building nested namespaces.
  • The “classic” method from 1.0 and earlier considers all public callable objects (functions, classes etc.) and only considers the objects in the fabfile itself, with no recursing into imported modules.

Note

These two methods are mutually exclusive: if Fabric finds any new-style task objects in your fabfile or in modules it imports, it will assume you’ve committed to this method of task declaration and won’t consider any non-~fabric.tasks.Task callables. If no new-style tasks are found, it reverts to the classic behavior.

The rest of this document explores these two methods in detail.

Note

To see exactly what tasks in your fabfile may be executed via fab, use fab --list.

New-style tasks

Fabric 1.1 introduced the ~fabric.tasks.Task class to facilitate new features and enable some programming best practices, specifically:

  • Object-oriented tasks. Inheritance and all that comes with it can make for much more sensible code reuse than passing around simple function objects. The classic style of task declaration didn’t entirely rule this out, but it also didn’t make it terribly easy.
  • Namespaces. Having an explicit method of declaring tasks makes it easier to set up recursive namespaces without e.g. polluting your task list with the contents of Python’s os module (which would show up as valid “tasks” under the classic methodology.)

With the introduction of ~fabric.tasks.Task, there are two ways to set up new tasks:

  • Decorate a regular module level function with @task <fabric.decorators.task>, which transparently wraps the function in a ~fabric.tasks.Task subclass. The function name will be used as the task name when invoking.
  • Subclass ~fabric.tasks.Task (~fabric.tasks.Task itself is intended to be abstract), define a run method, and instantiate your subclass at module level. Instances’ name attributes are used as the task name; if omitted the instance’s variable name will be used instead.

Use of new-style tasks also allows you to set up namespaces.

The @task decorator

The quickest way to make use of new-style task features is to wrap basic task functions with @task <fabric.decorators.task>:

from fabric.api import task, run

@task
def mytask():
    run("a command")

When this decorator is used, it signals to Fabric that only functions wrapped in the decorator are to be loaded up as valid tasks. (When not present, classic-style task behavior kicks in.)

Arguments

@task <fabric.decorators.task> may also be called with arguments to customize its behavior. Any arguments not documented below are passed into the constructor of the task_class being used, with the function itself as the first argument (see Using custom subclasses with @task for details.)

  • task_class: The ~fabric.tasks.Task subclass used to wrap the decorated function. Defaults to ~fabric.tasks.WrappedCallableTask.
  • aliases: An iterable of string names which will be used as aliases for the wrapped function. See Aliases for details.
  • alias: Like aliases but taking a single string argument instead of an iterable. If both alias and aliases are specified, aliases will take precedence.
  • default: A boolean value determining whether the decorated task also stands in for its containing module as a task name. See Default tasks.
  • name: A string setting the name this task appears as to the command-line interface. Useful for task names that would otherwise shadow Python builtins (which is technically legal but frowned upon and bug-prone.)

Aliases

Here’s a quick example of using the alias keyword argument to facilitate use of both a longer human-readable task name, and a shorter name which is quicker to type:

from fabric.api import task

@task(alias='dwm')
def deploy_with_migrations():
    pass

Calling --list on this fabfile would show both the original deploy_with_migrations and its alias dwm:

$ fab --list
Available commands:

    deploy_with_migrations
    dwm

When more than one alias for the same function is needed, simply swap in the aliases kwarg, which takes an iterable of strings instead of a single string.
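
For example (alias names made up for illustration):

from fabric.api import task

@task(aliases=['dwm', 'deploy_wm'])
def deploy_with_migrations():
    pass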

Default tasks

In a similar manner to aliases, it’s sometimes useful to designate a given task within a module as the “default” task, which may be called by referencing just the module name. This can save typing and/or allow for neater organization when there’s a single “main” task and a number of related tasks or subroutines.

For example, a deploy submodule might contain tasks for provisioning new servers, pushing code, migrating databases, and so forth – but it’d be very convenient to highlight a task as the default “just deploy” action. Such a deploy.py module might look like this:

from fabric.api import task

@task
def migrate():
    pass

@task
def push():
    pass

@task
def provision():
    pass

@task
def full_deploy():
    if not provisioned:  # assume a module-level flag defined elsewhere
        provision()
    push()
    migrate()

With the following task list (assuming a simple top level fabfile.py that just imports deploy):

$ fab --list
Available commands:

    deploy.full_deploy
    deploy.migrate
    deploy.provision
    deploy.push

Calling deploy.full_deploy on every deploy could get kind of old, or somebody new to the team might not be sure if that’s really the right task to run.

Using the default kwarg to @task <fabric.decorators.task>, we can tag e.g. full_deploy as the default task:

@task(default=True)
def full_deploy():
    pass

Doing so updates the task list like so:

$ fab --list
Available commands:

    deploy
    deploy.full_deploy
    deploy.migrate
    deploy.provision
    deploy.push

Note that full_deploy still exists as its own explicit task – but now deploy shows up as a sort of top level alias for full_deploy.

If multiple tasks within a module have default=True set, the last one to be loaded (typically the one lowest down in the file) will take precedence.

Top-level default tasks

Using @task(default=True) in the top level fabfile will cause the denoted task to execute when a user invokes fab without any task names (similar to e.g. make.) When using this shortcut, it is not possible to specify arguments to the task itself – use a regular invocation of the task if this is necessary.
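
A minimal sketch (the test-runner command is hypothetical):

# fabfile.py
from fabric.api import local, task

@task(default=True)
def test():
    # Runs when a user simply types `fab` with no task names.
    local("python -m pytest")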

Task subclasses

If you’re used to classic-style tasks, an easy way to think about ~fabric.tasks.Task subclasses is that their run method is directly equivalent to a classic task; its arguments are the task arguments (other than self) and its body is what gets executed.

For example, this new-style task:

from fabric.api import run, sudo
from fabric.tasks import Task

class MyTask(Task):
    name = "deploy"
    def run(self, environment, domain="whatever.com"):
        run("git clone foo")
        sudo("service apache2 restart")

instance = MyTask()

is exactly equivalent to this function-based task:

@task
def deploy(environment, domain="whatever.com"):
    run("git clone foo")
    sudo("service apache2 restart")

Note how we had to instantiate an instance of our class; that’s simply normal Python object-oriented programming at work. While it’s a small bit of boilerplate right now – for example, Fabric doesn’t care about the name you give the instantiation, only the instance’s name attribute – it’s well worth the benefit of having the power of classes available.

We plan to extend the API in the future to make this experience a bit smoother.

Using custom subclasses with @task

It’s possible to marry custom ~fabric.tasks.Task subclasses with @task <fabric.decorators.task>. This may be useful in cases where your core execution logic doesn’t do anything class/object-specific, but you want to take advantage of class metaprogramming or similar techniques.

Specifically, any ~fabric.tasks.Task subclass which is designed to take in a callable as its first constructor argument (as the built-in ~fabric.tasks.WrappedCallableTask does) may be specified as the task_class argument to @task <fabric.decorators.task>.

Fabric will automatically instantiate a copy of the given class, passing in the wrapped function as the first argument. All other args/kwargs given to the decorator (besides the “special” arguments documented in Arguments) are added afterwards.

Here’s a brief and somewhat contrived example to make this obvious:

from fabric.api import task
from fabric.tasks import Task

class CustomTask(Task):
    def __init__(self, func, myarg, *args, **kwargs):
        super(CustomTask, self).__init__(*args, **kwargs)
        self.func = func
        self.myarg = myarg

    def run(self, *args, **kwargs):
        return self.func(*args, **kwargs)

@task(task_class=CustomTask, myarg='value', alias='at')
def actual_task():
    pass

When this fabfile is loaded, a copy of CustomTask is instantiated, effectively calling:

task_obj = CustomTask(actual_task, myarg='value')

Note how the alias kwarg is stripped out by the decorator itself and never reaches the class instantiation; this is identical in function to how command-line task arguments work.

Namespaces

With classic tasks, fabfiles were limited to a single, flat set of task names with no real way to organize them. In Fabric 1.1 and newer, if you declare tasks the new way (via @task <fabric.decorators.task> or your own ~fabric.tasks.Task subclass instances) you may take advantage of namespacing:

  • Any module objects imported into your fabfile will be recursed into, looking for additional task objects.
  • Within submodules, you may control which objects are “exported” by using the standard Python __all__ module-level variable name (though they should still be valid new-style task objects.)
  • These tasks will be given new dotted-notation names based on the modules they came from, similar to Python’s own import syntax.

Let’s build up a fabfile package from simple to complex and see how this works.

Basic

We start with a single __init__.py containing a few tasks (the Fabric API import omitted for brevity):

@task
def deploy():
    ...

@task
def compress():
    ...

The output of fab --list would look something like this:

deploy
compress

There’s just one namespace here: the “root” or global namespace. Looks simple now, but in a real-world fabfile with dozens of tasks, it can get difficult to manage.

Importing a submodule

As mentioned above, Fabric will examine any imported module objects for tasks, regardless of where that module exists on your Python import path. For now we just want to include our own, “nearby” tasks, so we’ll make a new submodule in our package for dealing with, say, load balancers – lb.py:

@task
def add_backend():
    ...

And we’ll add this to the top of __init__.py:

import lb

Now fab --list shows us:

deploy
compress
lb.add_backend

Again, with only one task in its own submodule, it looks kind of silly, but the benefits should be pretty obvious.

Going deeper

Namespacing isn’t limited to just one level. Let’s say we had a larger setup and wanted a namespace for database related tasks, with additional differentiation inside that. We make a sub-package named db/ and inside it, a migrations.py module:

@task
def list():
    ...

@task
def run():
    ...

We need to make sure that this module is visible to anybody importing db, so we add it to the sub-package’s __init__.py:

import migrations

As a final step, we import the sub-package into our root-level __init__.py, so now its first few lines look like this:

import lb
import db

After all that, our file tree looks like this:

.
├── __init__.py
├── db
│   ├── __init__.py
│   └── migrations.py
└── lb.py

and fab --list shows:

deploy
compress
lb.add_backend
db.migrations.list
db.migrations.run

We could also have specified (or imported) tasks directly into db/__init__.py, and they would show up as db.<whatever> as you might expect.

Limiting with __all__

You may limit what Fabric “sees” when it examines imported modules, by using the Python convention of a module level __all__ variable (a list of variable names.) If we didn’t want the db.migrations.run task to show up by default for some reason, we could add this to the top of db/migrations.py:

__all__ = ['list']

Note the lack of 'run' there. You could, if needed, import run directly into some other part of the hierarchy, but otherwise it’ll remain hidden.

Switching it up

We’ve been keeping our fabfile package neatly organized and importing it in a straightforward manner, but the filesystem layout doesn’t actually matter here. All Fabric’s loader cares about is the names the modules are given when they’re imported.

For example, if we changed the top of our root __init__.py to look like this:

import db as database

Our task list would change thusly:

deploy
compress
lb.add_backend
database.migrations.list
database.migrations.run

This applies to any other import – you could import third party modules into your own task hierarchy, or grab a deeply nested module and make it appear near the top level.

Nested list output

As a final note, we’ve been using the default Fabric --list output during this section – it makes it more obvious what the actual task names are. However, you can get a more nested or tree-like view by passing nested to the --list-format option:

$ fab --list-format=nested --list
Available commands (remember to call as module.[...].task):

    deploy
    compress
    lb:
        add_backend
    database:
        migrations:
            list
            run

While it slightly obfuscates the “real” task names, this view provides a handy way of noting the organization of tasks in large namespaces.

Classic tasks

When no new-style ~fabric.tasks.Task-based tasks are found, Fabric will consider any callable object found in your fabfile, except the following:

  • Callables whose name starts with an underscore (_). In other words, Python’s usual “private” convention holds true here.
  • Callables defined within Fabric itself. Fabric’s own functions such as ~fabric.operations.run and ~fabric.operations.sudo will not show up in your task list.

Imports

Python’s import statement effectively includes the imported objects in your module’s namespace. Since Fabric’s fabfiles are just Python modules, this means that imports are also considered as possible classic-style tasks, alongside anything defined in the fabfile itself.

Note

This only applies to imported callable objects – not modules. Imported modules only come into play if they contain new-style tasks, at which point this section no longer applies.

Because of this, we strongly recommend that you use the import module form of importing, followed by module.callable(), which will result in a cleaner fabfile API than doing from module import callable.

For example, here’s a sample fabfile which uses urllib.urlopen to get some data out of a webservice:

from urllib import urlopen

from fabric.api import run

def webservice_read():
    objects = urlopen('http://my/web/service/?foo=bar').read().split()
    print(objects)

This looks simple enough, and will run without error. However, look what happens if we run fab --list on this fabfile:

$ fab --list
Available commands:

  webservice_read
  urlopen           urlopen(url [, data]) -> open file-like object

Our fabfile of only one task is showing two “tasks”, which is bad enough, and an unsuspecting user might accidentally try to call fab urlopen, which probably won’t work very well. Imagine any real-world fabfile, which is likely to be much more complex, and hopefully you can see how this could get messy fast.

For reference, here’s the recommended way to do it:

import urllib

from fabric.api import run

def webservice_read():
    objects = urllib.urlopen('http://my/web/service/?foo=bar').read().split()
    print(objects)

It’s a simple change, but it’ll make anyone using your fabfile a bit happier.

API documentation

Fabric maintains two sets of API documentation, both auto-generated from the docstrings in the source code, and both quite thorough.

Core API

The core API comprises the functions, classes and methods that make up Fabric’s basic building blocks (for example ~fabric.operations.run and ~fabric.operations.sudo). Everything else (the contrib API below, and users’ own fabfiles) is built on top of this core.

Contrib API

Fabric’s contrib package contains commonly useful tools (often merged in from users’ fabfiles) for tasks such as user I/O and modifying remote files. While the core API aims to stay small and stable, contrib keeps growing and evolving as more use cases are solved and added, while doing its best to remain backwards-compatible.

Console Output Utilities

Django Integration

File and Directory Management

Project Tools