Managing multiple SSH agents
Revision as of 11:42, 17 January 2020
 
For example, point SSH_AUTH_SOCK at a dedicated socket for the cloud agent:
export SSH_AUTH_SOCK="/run/user/1000/ssh-cloud.socket"
 
=== The simplest solution ===
There is an easy answer to this problem, though it's not very flexible. Run two terminals on your workstation. Load a fresh agent in one of them. Always use that one to connect to Toolforge/CloudVPS and the other to connect to other places.
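In the Toolforge/CloudVPS terminal, "load a fresh agent" looks something like the following sketch; the key path is an assumption, so substitute your own:

```shell
# Start a fresh agent whose socket only this terminal knows about.
# ssh-agent prints shell commands that set SSH_AUTH_SOCK and
# SSH_AGENT_PID; eval applies them to the current shell.
eval "$(ssh-agent -s)"

# Load only the cloud key into it (path is an assumption).
if [ -f ~/.ssh/cloud-key-rsa ]; then
    ssh-add ~/.ssh/cloud-key-rsa
fi
```

The other terminal keeps whatever agent your desktop session already provides, so the two key sets never mix.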
 
=== A more complex solution ===
:'' The items listed here are entirely untested by current staff, and left over from the past.''
This solution has the advantage of being able to connect to Toolforge/CloudVPS or other hosts indiscriminately from any terminal running on your workstation (or in screen) etc. It protects you against accidentally attempting to authenticate against Toolforge/CloudVPS with the wrong key.
 
==== Setup ====
This solution assumes you are running bash as your local shell. It can probably be adapted for other shells with minimal effort. It involves creating a socket connected to your ssh-agent at a predictable location and using a bash function to change your environment to use the Toolforge/CloudVPS agent when connecting to Toolforge/CloudVPS.
 
This solution is also geared towards running [http://www.gnu.org/software/screen/ screen]. It's a little more complicated than necessary because when you disconnect and then reconnect to a screen session, SSH_AUTH_SOCK has usually changed. We override that with a predictable location, so that as the agent moves around, old screen sessions still have access to the current agent.
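The core of the trick can be sketched on its own: pin a fixed path that always points at whichever agent socket is current. The paths below mirror the ones used by persistent_agent; the placeholder socket value is an assumption for illustration:

```shell
# Default USER and SSH_AUTH_SOCK for illustration if they're unset;
# normally SSH_AUTH_SOCK comes from your desktop session or ssh -A.
USER="${USER:-$(id -un)}"
SSH_AUTH_SOCK="${SSH_AUTH_SOCK:-/tmp/example-agent.sock}"

# A fixed, predictable location that survives screen detach/reattach.
stable_sock="/tmp/$USER-ssh-agent/valid-agent"
mkdir -p "$(dirname "$stable_sock")"
chmod 700 "$(dirname "$stable_sock")"

# Repoint the stable symlink at the current agent socket; shells in old
# screen sessions keep using the stable path and so still find the agent.
ln -sf "$SSH_AUTH_SOCK" "$stable_sock"
export SSH_AUTH_SOCK="$stable_sock"
```

The persistent_agent function below wraps this idea with validity checks and fallbacks.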
 
We start by creating a socket that can talk to our regular agent at a predictable location every time we start a new shell. In .bashrc:
if [ -f ~/.persistent_agent ]; then source ~/.persistent_agent; fi
persistent_agent /tmp/$USER-ssh-agent/valid-agent
Next we set up a function specifically for connecting to Toolforge/CloudVPS
# ssh into Toolforge/CloudVPS with an isolated agent
function cloud() {
    oldagent=$SSH_AUTH_SOCK
    SSH_AUTH_SOCK=''
    persistent_agent /tmp/$USER-ssh-agent/cloud-agent
    # add the key if necessary
    if ! ssh-add -l | grep -q cloud-key-rsa; then
        ssh-add ~/.ssh/cloud-key-rsa
    fi
    ssh -A -D 8080 bastion.wmflabs.org
    SSH_AUTH_SOCK=$oldagent
}
And a similar function to copy files into Toolforge/CloudVPS with scp:
# scp into Toolforge/CloudVPS with an isolated agent
function cloudcp() {
    oldagent=$SSH_AUTH_SOCK
    SSH_AUTH_SOCK=''
    persistent_agent /tmp/$USER-ssh-agent/cloud-agent
    # add the key if necessary
    if ! ssh-add -l | grep -q cloud-key-rsa; then
        ssh-add ~/.ssh/cloud-key-rsa
    fi
    scp "$@"
    SSH_AUTH_SOCK=$oldagent
}
Last, we make sure we clean up our old agents when we completely disconnect from the system; otherwise we'll wind up with the agent running even when we're not connected to Toolforge/CloudVPS. This is a little tricky because we don't want to kill the agent when we close the first connection we made to Toolforge/CloudVPS, only when we're actually done working. As a proxy for 'done working', I use 'I log out of the last shell I have open on this system'. This is not a great solution: if the connection dies, or I just quit Terminal or something like that instead of explicitly logging out, .bash_logout never runs. Add to .bash_logout:
# if this is the last copy of my shell exiting the host and there are any agents running, kill them.
if [ $(w | grep $USER | wc -l) -eq 1 ]; then
    pkill ssh-agent
fi
Just for good measure, let's throw a line in my user crontab (replace ben with your own username) that will kill any agents running if I'm not logged in:
# if I'm not logged in, kill any of my running ssh-agents.
* * * * * if ! /usr/bin/w | /bin/grep ben ; then /usr/bin/pkill ssh-agent; fi > /dev/null 2>&1
 
Finally, here is the code for the persistent_agent function:
## preconditions and effects:
##   $validagent already exists and works, in which case we do nothing
##   SSH_AUTH_SOCK contains a valid running agent, in which case we update $validagent to use that socket
##   SSH_AUTH_SOCK is empty, in which case we start a new agent and point $validagent at that.
##   SSH_AUTH_SOCK exists but doesn't actually connect to an agent and there's no existing validagent; we'll start a new one.
## end result:
##   validagent always points to a running agent, either local or your existing forwarded agent
function persistent_agent() {
    validagent=$1
    validagentdir=$(dirname ${validagent})
    # if it's not a directory or it doesn't exist, make it.
    if [ ! -d ${validagentdir} ]; then
        # just in case it's a file
        rm -f ${validagentdir}
        mkdir -p ${validagentdir}
        chmod 700 ${validagentdir}
    fi
    # only proceed if it's owned by me
    if [ -O ${validagentdir} ]; then
        # update the timestamp on the directory to make sure tmpreaper doesn't delete it
        touch ${validagentdir}
        # if the validagent already works, we're done
        orig_sock=$SSH_AUTH_SOCK
        SSH_AUTH_SOCK=${validagent}
        if ssh-add -l > /dev/null 2>&1; then
            return
        fi
        SSH_AUTH_SOCK=$orig_sock
        # ok, the validagent doesn't already work, let's move on towards setting it up.
        # if SSH_AUTH_SOCK is a valid agent, we'll use it.
        if ssh-add -l > /dev/null 2>&1; then
            ln -svf $SSH_AUTH_SOCK $validagent
            SSH_AUTH_SOCK=$validagent
            return
        fi
        # note - inverting the order of the previous two tests changes behavior from
        # 'first valid agent gets $validagent' to 'most recent valid agent gets $validagent'.
        # ok, at this point SSH_AUTH_SOCK doesn't point to a valid agent (it might be
        # empty or have bad contents); let's just start up a new agent and use that.
        echo "triggering new agent"
        eval $(ssh-agent)
        ln -svf $SSH_AUTH_SOCK $validagent
        SSH_AUTH_SOCK=$validagent
        return
    fi
    # at this point, I failed to own my $validagentdir. Someone's trying to do something nasty? Who knows.
    # I've failed to create a validagent. Announce that and bail.
    echo "Failed to create a valid agent - bad ownership of ${validagentdir}"
    return
}
 
===== Use =====
Note that I already have my regular key loaded:
ben@green:~$ ssh-add -l
2048 25:9e:91:d5:2f:be:73:e8:ff:37:63:ae:83:5b:33:e1 /Users/ben/.ssh/id_rsa (RSA)
The first time (in a given day) you connect to Toolforge/CloudVPS, you are prompted to enter the passphrase for your key, and when you get to bastion, it can only see your Toolforge/CloudVPS key:
ben@green:~$ cloud
triggering new agent
Agent pid 32638
`/tmp/ben-ssh-agent/cloud-agent' -> `/tmp/ssh-YfZWc32637/agent.32637'
Enter passphrase for /home/ben/.ssh/cloud-key:
Identity added: /home/ben/.ssh/cloud-key (/home/ben/.ssh/cloud-key)
[motd excerpted]
ben@bastion:~$ ssh-add -l
2048 60:a2:b5:a5:fe:47:07:d6:d5:78:50:50:ba:50:14:46 /home/ben/.ssh/cloud-key (RSA)
When connecting subsequent shells (until the end of the day, when you log out of your workstation and all your agents are killed), you are connected without being prompted for your passphrase.
ben@green:~$ cloud
[motd excerpted]
ben@bastion:~$
Copying files means just using cloudcp instead of scp:
ben@green:~$ cloudcp foo bastion.wmflabs.org:/tmp/
foo 100% 43KB 43.0KB/s 00:00
But when you log out of bastion (in any connection), your normal key is once again available for connecting to personal or other hosts:
ben@bastion:~$ logout
Connection to bastion.wmflabs.org closed.
ben@green:~$ ssh-add -l
2048 25:9e:91:d5:2f:be:73:e8:ff:37:63:ae:83:5b:33:e1 /Users/ben/.ssh/id_rsa (RSA)
Giuseppe Lavagetto