Subject: Re: curl / libssh2 sftp write performance (with patch)

From: Daniel Jeliński <>
Date: Sun, 26 Aug 2018 21:24:26 +0200

On Sun, 26 Aug 2018 at 10:57, Daniel Stenberg <> wrote:

> So will it not care for any ACKs? If you send a 10GB file and the first packet
> is never acked? Maybe a limit for amount of outstanding un-acked data?

No, not like that; available acks are processed after every successful
send. We just don't wait for acks that haven't arrived yet, at least
in nonblocking mode. I don't know how the code behaves in blocking
mode; I should probably check that.

Side note: I had a coding bug that caused exactly the behavior you
describe, sending a 1GB file before checking for any acks. The result
was a dramatic slowdown when sftp_close went looking for its ack in
the long list of unprocessed received packets.

Limiting the number of outstanding packets sounds reasonable; it would
protect us against malicious SSH servers. However, I don't see an easy
way to pick a number that would never limit our transfer rates. I need
to think it through.

> Can we make users opt-in to this and if not, do like before?

How would you suggest implementing that? I don't see any existing
mechanism that could be used for it: there's no libssh2_sftp_setopt,
no version argument in libssh2_sftp_init, and no extra parameter to
libssh2_sftp_open that could carry such a flag.
I could implement this as a new set of functions duplicating the
existing functionality (like POSIX defines both write and fwrite, I
could add libssh2_sftp_fwrite), if you think that's the right
direction to take.

Received on 2018-08-26