Originally I thought this would be easy to support - we'd just
pick a machine at random from a given set of machines in a role.
But for it to really work like a build server, it would need to be
quite a bit smarter - Octopus would have to be aware of what
activities each machine is running, and queue them intelligently or
use a pull-based model (it would be annoying if Octopus picked a
machine that was already busy doing something else).
Supporting that would be a pretty significant architectural change,
so it's not something we've attempted yet, though we are thinking
about it this year.
The workaround is usually to just use a build server - some
people use TeamCity to trigger deployments with Octopus, then run
some tasks in TeamCity, then do more things in Octopus.
Please let us know when you have updates on that. We have
exactly the same use case for database deployments.
Currently our approach is to have one Tentacle per environment, but
as our environments and projects grow it's starting to get hard to
maintain. We are working on some scripting to make that easier. I
don't have the code yet, but I did work out a plan using file-based locks.
The plan is to have X number of tentacles with the role of
'db-deploy' that will exist in all environments. (Call these
Server1, Server2, etc.) When a deployment starts for any project,
it will first get a mutex on the Octopus server and create a
file-based lock (using a simple .txt file) for Server1. When a
second deployment starts, it checks to see if there is a txt file
locking Server1. When it sees there is, it then creates a lock for
Server2. This continues, with each deployment using a file-based
locking system to determine which server to deploy to. Once it
finds a free server, it sets an output variable naming the server it will use.
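The selection loop described above can be sketched roughly like this. Python for illustration only; in a real Octopus step this would more likely be PowerShell, and the lock folder path and server names are assumptions, not real settings:

```python
import os

# Hypothetical shared lock folder and db-deploy server list; adjust to your setup.
LOCK_DIR = r"C:\OctopusLocks"
SERVERS = ["Server1", "Server2", "Server3"]

def acquire_server(lock_dir=LOCK_DIR, servers=SERVERS):
    """Claim the first db-deploy server that has no lock file."""
    os.makedirs(lock_dir, exist_ok=True)
    for server in servers:
        lock_path = os.path.join(lock_dir, server + ".txt")
        try:
            # O_CREAT | O_EXCL makes creation atomic: the call fails if
            # the lock file already exists, so two deployments starting
            # at the same time cannot both claim the same server.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return server
        except FileExistsError:
            continue  # server is busy; try the next one
    return None  # every server is locked; wait and retry
```

The atomic exclusive-create is the important part: a separate "check if the file exists, then create it" sequence has the same race the existence check is trying to prevent.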
The database deployment step will be set to run on the 'db-deploy'
role, and the first part of that deployment will be to check the
output variable for which server it should run on. All other servers
immediately return, while the actual db deploy runs on the selected server.
The final step in the project, which runs on success or failure,
will be to unlock the file.
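The guard at the top of the step and the final unlock step could look like this. Again a sketch in Python; the function and variable names are hypothetical:

```python
import os

def should_run(this_server, selected_server):
    """Guard at the top of the db-deploy step: only the server named in
    the output variable proceeds; every other Tentacle returns at once."""
    return this_server == selected_server

def release_server(server, lock_dir):
    """Final step, run on success or failure: delete the lock file so
    the server becomes available to the next deployment."""
    lock_path = os.path.join(lock_dir, server + ".txt")
    if os.path.exists(lock_path):
        os.remove(lock_path)
```

Running the release step on failure as well as success matters: a failed deployment that leaves its lock file behind would take that server out of the pool permanently.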
Note that this process would require setting the
OctopusBypassDeploymentMutex variable for all projects, which could
cause other issues.
I already have the file-locking code done; we use it to limit
simultaneous deployments of certain types of projects. I can share
that code later in the week. The code for determining the db server
shouldn't be too hard to add on top, but I haven't had time to do
that yet.
We currently assign a role of "DbDeployer" to one of the nodes and restrict the DbDeploy step to run only on that role. But this has the unpleasant side effect of disabling db deploys whenever that node is taken out of the environment for maintenance or other reasons, which might not be obvious to anyone on the team but me.
The use case for us is that we have Octopus packages containing MSIs to be deployed to several workstations. We also need to copy the MSIs to a network share. We routinely get failed deployments because multiple Tentacles attempt to copy the MSI to the network share at the same time, even though we check for the existence of the file first. Ideally, we'd be able to specify that a step should run on only one of the available Tentacles. I can see how this would be useful for database migrators or other "do once and forget" jobs.
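The existence check races because two Tentacles can both see the file missing and both start copying. One possible fix is an atomic exclusive-create "claim" file, so that exactly one Tentacle wins and performs the copy. A minimal sketch, with hypothetical paths (worth verifying that your network share honors exclusive create, which SMB generally does):

```python
import os
import shutil

def copy_once(src, dest):
    """Copy src to dest only if this process wins an atomic claim."""
    claim = dest + ".claim"
    try:
        # Exclusive create: exactly one process can create the claim
        # file, so exactly one performs the copy. A plain existence
        # check followed by a copy is not atomic and can race.
        fd = os.open(claim, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
    except FileExistsError:
        return False  # another Tentacle already claimed this copy
    shutil.copy2(src, dest)
    return True
```

Every Tentacle can then call `copy_once` unconditionally; the losers get `False` back and skip the copy instead of failing the deployment.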