Fixing recursive uploads with lftp: The tale of the rogue symbolic link
I've been setting up continuous deployment recently for an application I'm working on, and as part of this process I'm uploading the release with sftp, using a restricted user account that is chrooted (though I use a subfolder of the home directory to be extra sure) and has no shell access.
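For anyone curious, that kind of locked-down account is usually set up with a Match block in sshd_config along these lines (a minimal sketch; the user name and chroot path here are placeholders, not my actual config):

Match User deploy
    ChrootDirectory /home/deploy
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

The ForceCommand internal-sftp line is what takes away the shell, and ChrootDirectory pins the account inside its home directory.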
Since the application is written in PHP, I use composer to manage the server-side PHP library dependencies - which works very well. The problems start when I try to upload the whole thing to the server - so I thought I'd make a quick post here on how I fixed it.
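For reference, the dependency step in the build boils down to the standard Composer install (shown here as a sketch; the exact flags in my build script may differ):

# Install production dependencies only and generate an optimised autoloader
composer install --no-dev --optimize-autoloader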
In a previous build step, I generate an archive for the release, and put it in the continuous integration (CI) archive folder.
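That step amounts to something like this (a sketch; the directory variables and archive name are placeholders):

# Pack the built application into a compressed archive in the CI archive folder
tar -czf "${ARCHIVE_DIR}/release.tar.gz" -C "${build_dir}" .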
In the deployment phase, the script unpacks this compressed archive and then uploads it to the production server with lftp, because I need to do some fiddling about that I can't do with regular sftp (anyone up for a tutorial on this? I'd be happy to write a few posts about it). However, I kept getting this weird error in the CI logs:
lftp: MirrorJob.cc:242: void MirrorJob::JobFinished(Job*): Assertion `transfer_count>0' failed.
./lantern-build-engine/lantern.sh: line 173: 5325 Aborted $command_name $@
Very strange indeed! Apparently, lftp isn't known for outputting especially useful error messages when used in an automated script like this. I tried everything. I rewrote, refactored, and completely turned the whole thing upside-down multiple times. This, as you might have guessed, took quite a while.
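For context, the failing upload was an lftp mirror along these general lines (a trimmed-down sketch rather than the exact commands in my script):

lftp <<LFTP_COMMANDS
set sftp:connect-program "ssh -a -x -i ${SSH_KEY_PATH}"
open -u "${deploy_ssh_user}," -p "${deploy_ssh_port}" "sftp://${deploy_ssh_host}"
mirror --reverse --verbose "${source_upload_dir}" "${deploy_root_dir}/www-new"
bye
LFTP_COMMANDS

The mirror --reverse is what walks the local directory tree recursively and uploads it, and it's during that recursive walk that the assertion above fires.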
Commits aside, it was only when I refactored it to do the upload via the regular sftp command like this that it became apparent what the problem was:
sftp -i "${SSH_KEY_PATH}" -P "${deploy_ssh_port}" -o PasswordAuthentication=no "${deploy_ssh_user}@${deploy_ssh_host}" << SFTPCOMMANDS
mkdir ${deploy_root_dir}/www-new
put -r ${source_upload_dir}/* ${deploy_root_dir}/www-new
bye
SFTPCOMMANDS
Thankfully, sftp outputs much more helpful error messages. I saw this in the CI logs:
.....
Entering /tmp/tmp.ssR3j7vGhC-air-quality-upload//vendor/nikic/php-parser/bin
Entering /tmp/tmp.ssR3j7vGhC-air-quality-upload//vendor/bin
php-parse: not a regular file
The last line there instantly told me what I needed to know: It was failing to upload a symbolic link.
The solution here was simple: unwind the symbolic links into hard links instead. That way I still get the benefit of a link on the local disk, but sftp will treat each one as a regular file and upload a duplicate.
This is done like so:
find "${temp_dir}" -type l -exec bash -c 'ln -f "$(readlink -m "$0")" "$0"' {} \;
Thanks to SuperUser for the above (though I would have expected to find it on the Unix Stack Exchange).
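If that one-liner looks a bit dense, the same operation written out as a loop goes roughly like this (an equivalent sketch, not what the deployment script actually runs):

# For every symbolic link under the temporary directory...
find "${temp_dir}" -type l | while read -r link; do
    # ...resolve it to the absolute path of whatever it points at...
    target="$(readlink -m "$link")"
    # ...then overwrite the symlink with a hard link to that target
    ln -f "$target" "$link"
done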
If you'd like to see the full deployment script I've written, you can find it here.
There's actually quite a bit of context to how I ended up encountering this problem in the first place - which includes things like CI servers, no small amount of bash scripting, git servers, and remote deployment.
In the future, I'd like to make a few posts about the exploration I've been doing in these areas - perhaps along the lines of "how did we get here?", as I think they'd make for interesting reading.....