[clug] A "How to" SSH question: copy file (scp/sftp) to remote system, then trigger a handler on remote.

Brendan Jurd direvus at gmail.com
Thu Jan 21 06:17:10 UTC 2016


I would probably omit the SSH file copy step entirely, and instead have a
server process sitting there, ready to receive files, which processes each
file and then returns a meaningful response code.  A RESTful web service
would work nicely: respond to POST requests with an appropriate HTTP code.

Another option would be to just write a simple TCP socket server that
performs a similar function; the API can be whatever makes sense for your
use case.
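Something like the following, with the caveat that the wire protocol here is purely an assumption (client sends the bytes then half-closes; server replies with a one-byte status):

```python
# Hedged sketch of a bare TCP receiver using the stdlib socketserver
# module. Assumed protocol: the client sends the file and shuts down
# its write side; the server replies b'0' on success, b'1' on failure.
import socketserver

def process_payload(data):
    # Placeholder for the real processing step.
    return len(data) > 0

class FileHandler(socketserver.StreamRequestHandler):
    def handle(self):
        data = self.rfile.read()               # read until client EOF
        self.wfile.write(b'0' if process_payload(data) else b'1')

# To serve: socketserver.TCPServer(('', 9000), FileHandler).serve_forever()
```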

It is possible to set up a daemon on the remote side to respond to file
activity ("inotify"), but in my experience inotify has a lot of tricky
corner cases.

On Thu, 21 Jan 2016 at 16:47 steve jenkin <sjenkin at canb.auug.org.au> wrote:

> I’m looking to move files to a remote system with ssh and have the copy
> trigger something to deal with them.
>
> scp “uses ssh for data transfer”, then sets owner/group, permissions &
> timestamp of the remote file.
> <https://en.wikipedia.org/wiki/Secure_copy>
>
> It’s possible to code scp as an ssh command, like:
>         ssh remote ‘cat >tmp; chown user:group tmp; touch -t CCYYMMDDhhmm.ss
> tmp; mv tmp destfile’ <srcfile
>
> I’d like to trigger a command on the remote system to do something with
> the recently arrived file, i.e. “pick it up” and process it in some way.
>
> In the pseudo-code above:
>         ssh remote ‘cat >tmp; chown user:group tmp; touch -t CCYYMMDDhhmm.ss
> tmp; mv tmp destfile; exec /path/to/command destfile &’ <srcfile
>
> On some systems, I’ve implemented an ‘incoming’ directory with a
> background daemon that regularly scans the directory and processes the
> files one by one if found, otherwise sleeps till the next scan.
>
> Not what I want to do here.
>
> An alternative is to execute the processing command first and just pipe
> the data to it via STDIN, passing in a
> Problems with this are:
>  a) organising temp dir for large files, and
>  b) overloading receiving/processing host with too many simultaneous calls
>
> Questions:
>
> 1. Has anyone seen this done? How?
>    There are going to be a lot of subtle errors and race conditions, not
> least clashing filenames from simultaneous copies.
>     I’m not looking to rediscover them for myself. Life is too short :(
>
> 2. Is there a standard/semi-std SSH sub-system on the server side to
> perform this copy/trigger function?
>     Even non-standard, like creating automatic triggers on specific
> file-system activity. [Is that a thing in modern Linuxes?]
>
> 3. For extra points, having delivered a file from my local system, I
> shouldn’t delete it before it’s saved & processed on the remote.
>      [in DB terms, a restartable transaction]
>    That implies either the sending SSH to hang around, waiting for a
> signal of some sort,
>      or some kind of return signal / file to be returned on next
> connection.
>    Also, I don’t ever want to get stuck in a loop of ‘send file, crash
> remote handler, resend file, crash remote, <till end of time>’
>    Seen that more than once :(
>
> Thanks in Advance
> steve
>
> --
> Steve Jenkin, IT Systems and Design
> 0412 786 915 (+61 412 786 915)
> PO Box 48, Kippax ACT 2615, AUSTRALIA
>
> mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin
>
>
> --
> linux mailing list
> linux at lists.samba.org
> https://lists.samba.org/mailman/listinfo/linux
>

