37

I'm using the command below to transfer files across servers

scp -rc blowfish /source/directory/* [email protected]:/destination/directory

Is there a way to transfer only modified files, like the -u (update) option for cp?

3 Answers

65

rsync is your friend.

rsync -ru /source/directory/* [email protected]:/destination/directory

If you want it to delete files at the destination that no longer exist at the source, add the --delete option.

6
  • 5
    Although, if you are syncing to a web server and that server caches HTML, you might not want to use --delete, since visitors on a stale page might request an asset that no longer exists. May 24, 2016 at 21:56
  • @Jackson a clean process will not include variable data like caches or configs in the sources. Nov 17, 2018 at 7:40
  • In some cases I have to deal with servers outside of my domain that don't have rsync but do have scp. Is there a comparable solution, even if it needs a few lines of scripting?
    – jacobq
    Dec 21, 2018 at 14:58
  • 1
    The answer is helpful, but the question was can this be achieved with scp itself.
    – Mladen B.
    Nov 2, 2019 at 20:02
  • 1
    @MladenB. How can it be achieved with scp? Dec 13, 2019 at 4:48
10

Generally one asks about scp for a reason, e.g. rsync can't be installed on the target.

# copy files modified in the last hour; -print0 / read -d '' keeps
# filenames with spaces intact (a plain backtick loop would split them)
find . -type f -newermt "-3600 secs" -print0 |
while IFS= read -r -d '' file
do
       sshpass -p "" scp "$file" "root@$IP:/usr/local/www/current/$file"
done
1
  • Note that "-cmin N" is maybe a tad simpler. While N is a number of minutes, the argument can also be fractional, e.g. "-cmin -0.5" gives you the files changed in the last 30 seconds. In my experiments the argument had to be negative, though. Feb 16 at 9:07
0

Another option:

remote_sum=$(ssh "${remote}" sha256sum "${remote_filename}")
remote_sum=${remote_sum%% *}   # keep only the hash, drop the filename
local_sum=$(sha256sum "${local_filename}")
local_sum=${local_sum%% *}
if [[ ${local_sum} != "${remote_sum}" ]]; then
  scp "${local_filename}" "${remote}:${remote_filename}"
fi

This is okay for one file but will be a bit slow for lots of files, depending on how quickly SSH can set up repeated connections. If you have ControlMaster set up for your SSH connection it might not be too bad. If you need to copy a whole directory tree recursively, you could run a single SSH command that sums all the remote files, load the result into a bash associative array, do the same on the local host, then compare the sums file by file to decide whether to copy. But it's almost certainly easier to install rsync on the remote.
