From: gnunet
Subject: [gnurl] branch master updated (3671d2089 -> 63b81ac1e)
Date: Thu, 07 Nov 2019 00:08:16 +0100

This is an automated email from the git hooks/post-receive script.

ng0 pushed a change to branch master
in repository gnurl.

    from 3671d2089 fix
     new 08f96982a ldap: Stop using wide char version of ldapp_err2string
     new 142d89edb winbuild/MakefileBuild.vc: Fix line endings
     new a765a3050 winbuild/MakefileBuild.vc: Add vssh
     new e34ec7de5 asyn-thread: s/AF_LOCAL/AF_UNIX for Solaris
     new 0aef91411 setopt: make it easier to add new enum values
     new 2c4590010 curlver: bump to 7.66.1
     new f83b2f1ae RELEASE-NOTES: synced
     new 4e3dfe332 docs/HTTP3: fix `--with-ssl` ngtcp2 configure flag
     new a56a47ac3 openssl: close_notify on the FTP data connection doesn't mean closure
     new b543f1fad curl:file2string: load large files much faster
     new 83b4cfacb parsedate: still provide the name arrays when disabled
     new 1ca91bcdb curl: fix memory leaked by parse_metalink()
     new acf1d2acd FTP: skip CWD to entry dir when target is absolute
     new 65f5b958c FTP: allow "rubbish" prepended to the SIZE response
     new 5977664d2 appveyor: add a winbuild
     new df26f5f9c CI: inintial github action job
     new 4a2d47e0b docs: fix typo in CURLOPT_HTTP_VERSION man
     new 5eb75d418 docs: remove trailing ':' from section names in CURLOPT_TRAILER* man
     new b76660272 doh: fix (harmless) buffer overrun
     new dda418266 doh: fix undefined behaviour and open up for gcc and clang optimization
     new a0f8fccb1 openssl: fix warning with boringssl and SSL_CTX_set_min_proto_version
     new 00da83415 quiche: persist connection details
     new 6de105369 smb: check for full size message before reading message details
     new 3ad883aed unit1655: make it C90 compliant
     new 9bc44ff64 doh: clean up dangling DOH handles and memory on easy close
     new 7c596f5de http2: relax verification of :authority in push promise requests
     new beb435091 url: cleanup dangling DOH request headers too
     new ac58c51b2 mime: when disabled, avoid C99 macro
     new 1c02a4e87 FTP: remove trailing slash from path for LIST/MLSD
     new 2a2404153 http: merge two "case" statements
     new fafad1496 README: add OSS-Fuzz badge [skip ci]
     new 3c5f9ba89 url: only reuse TLS connections with matching pinning
     new 2d55460ec RELEASE-NOTES: synced
     new 346188f6e version: next release will be 7.67.0
     new 0a4ecbdf1 urlapi: CURLU_NO_AUTHORITY allows empty authority/host part
     new 0d59addff doh: avoid truncating DNS QTYPE to lower octet
     new 69ea985d4 http: fix Expression 'http->postdata' is always false
     new 97c17e9fc ftp: part of conditional expression is always true: !result
     new a50c3d7fa ftp: Expression 'ftpc->wait_data_conn' is always true
     new 49f3117a2 ftp: Expression 'ftpc->wait_data_conn' is always false
     new e3c41ebd7 ftp: the conditional expression is always true
     new 3ab45650e url: part of expression is always true: (bundle->multiuse == 0)
     new 389426e3d url: remove dead code
     new 317c97bd8 version: Expression 'left > 1' is always true
     new 0b90ec9bb netrc: part of conditional expression is always true: !done
     new 2e68e5a02 easy: part of conditional expression is always true: !result
     new 07c1af922 multi: value '2L' is assigned to a boolean
     new d0390a538 imap: merged two case-branches performing the same action
     new cc95dbd64 http_proxy: part of conditional expression is always true: !error
     new 2ba62322a mime: make Curl_mime_duppart() assert if called without valid dst
     new 8f593f6d3 setopt: store CURLOPT_RTSP_SERVER_CSEQ correctly
     new b10464399 urlapi: part of conditional expression is always true: (relurl[0] == '/')
     new a6451487d urlapi: 'scheme' is always true
     new 36fbb1007 urlapi: Expression 'storep' is always true
     new 7d5524500 libssh2: part of conditional expression is always true: !result
     new b5a69b7a3 tool_getparam: remove duplicate switch case
     new 2d5f76f22 tool_operate: Expression 'config->resume_from' is always true
     new a89aeb545 tool_operate: removed unused variable 'done'
     new 52db0b89d travis: use go master
     new 698149e42 THANKS-filter: deal with my typos 'Jat' => 'Jay'
     new 63a8d2b17 ngtcp2: compile with latest ngtcp2 + nghttp3 draft-23
     new 47066036a urlapi: avoid index underflow for short ipv6 hostnames
     new 0801343e2 cookie: pass in the correct cookie amount to qsort()
     new 36ff5e37b FTP: FTPFILE_NOCWD: avoid redundant CWDs
     new 0b7d7abe2 appveyor: upgrade VS2017 to VS2019
     new 03ebe66d7 urldata: use 'bool' for the bit type on MSVC compilers
     new fe514ad9a http: fix warning on conversion from int to bit
     new d176a2c7e altsvc: both backends run h3-23 now
     new e09749dd4 travis: enable ngtcp2 h3-23 builds
     new 5ee88eee6 socks: Fix destination host shown on SOCKS5 error
     new f8a205853 curl: exit the create_transfers loop on errors
     new 367e4b3c4 openssl: fix compiler warning with LibreSSL
     new 41db01a39 RELEASE-NOTES: synced
     new 96a3ab7bc winbuild: Add manifest to curl.exe for proper OS version detection
     new 527461285 vtls: fix narrowing conversion warnings
     new 0023fce38 http: lowercase headernames for HTTP/2 and HTTP/3
     new bb7420180 doh: return early if there is no time left
     new a5bf6a36c doh: allow only http and https in debug mode
     new 89d972f24 vauth: The parameter 'status' must be surrounded by parentheses
     new 32fa04320 quiche: The expression must be surrounded by parentheses
     new 922189676 libssh: The expression is excessive or contains a misprint
     new b7e872ac1 libssh: part of conditional expression is always true
     new 9aed993da libssh: part of conditional expression is always true: !result
     new f91b82e68 http2: A value is being subtracted from the unsigned variable
     new b259baabf http2: Expression 'stream->stream_id != - 1' is always true
     new 4a778f75c strcase: fix raw lowercasing the letter X
     new 3e0a8e539 os400: getpeername() and getsockname() return ebcdic AF_UNIX sockaddr,
     new 9e78e739a HTTP3.md: move -p for mkdir, remove -j for make
     new 6e7733f78 urlapi: question mark within fragment is still fragment
     new a4c652099 altsvc: save h3 as h3-23
     new 218a62a6c altsvc: correct the #ifdef for the ngtcp2 backend
     new 7c7dac4db travis: move the go install to linux-only
     new af3ced3b9 url: fix the NULL hostname compiler warning case
     new 217812fa9 ngtcp2: remove fprintf() calls
     new cded99370 url: don't set appconnect time for non-ssl/non-ssh connections
     new 2078e7701 HTTP3: update quic.aiortc.org + add link to server list
     new 0ab38f5fd openssl: use strerror on SSL_ERROR_SYSCALL
     new 2f036a72d FTP: url-decode path before evaluation
     new 8bdff3528 HTTP3: show an --alt-svc using example too
     new 0ccdec339 HTTP3: merged and simplified the two 'running' sections
     new ea7744a07 Revert "FTP: url-decode path before evaluation"
     new 237746590 quiche: set 'drain' when returning without having drained the queues
     new b6532b809 quiche: don't close connection at end of stream!
     new 5f0b55ef2 HTTP3: fix prefix parameter for ngtcp2 build
     new e32488f57 README: minor grammar fix
     new c7e6b71e5 vtls: Fix comment typo about macosx-version-min compiler flag
     new 73089bf7f tests: fix narrowing conversion warnings
     new 500fb0e4c FTP: url-decode path before evaluation
     new a167ab6a1 FTP: add test for FTPFILE_NOCWD: Avoid redundant CWDs
     new 922dcba61 INSTALL: add vcpkg installation instructions
     new ee4cfd35a RELEASE-NOTES: synced
     new ed7350915 setopt: handle ALTSVC set to NULL
     new d0a7ee3f6 cookies: using a share with cookies shouldn't enable the cookie engine
     new 00b65e377 docs: disambiguate CURLUPART_HOST is for host name (ie no port)
     new 962ad8c5b BINDINGS: added clj-curl
     new 29a51e153 BINDINGS: Kapito is an Erlang library, basically a binding
     new 1c134e9cf BINDINGS: PureBasic, Net::Curl for perl and Nim
     new 19338e972 quiche: update HTTP/3 config creation to new API
     new 666a22675 mailmap: a Lucas fix
     new c24cf6c64 altsvc: accept quoted ma and persist values
     new b59c1e655 git: add tests/server/disabled to .gitignore
     new 79ea0c765 AppVeyor: remove MSYS2_ARG_CONV_EXCL for winbuild
     new 68b0aac2f AppVeyor: add 32-bit MinGW-w64 build
     new 69d95b6d4 lib: silence conversion warnings
     new 0f62c9af8 urlapi: fix unused variable warning
     new ac830139d checksrc: fix uninitialized variable warning
     new f0f053fed chunked-encoding: stop hiding the CURLE_BAD_CONTENT_ENCODING error
     new c124e6b3c CURLMOPT_MAX_CONCURRENT_STREAMS: new setopt
     new e59371a49 curl: create easy handles on-demand and not ahead of time
     new 54c622aa8 tool_operate: rename functions to make more sense
     new 2c20109a9 urlapi: fix URL encoding when setting a full URL
     new c6f250c4d redirect: when following redirects to an absolute URL, URL encode it
     new 475324b27 RELEASE-NOTES: synced
     new 0f48055c4 ESNI: initial build/setup
     new 683102e0a CURLMOPT_MAX_CONCURRENT_STREAMS.3: fix SEE ALSO typo
     new 0b386392d docs: add note on failed handles not being counted by curl_multi_perform
     new 13ecc0725 cookie: avoid harmless use after free
     new 02c6b984c urlapi: fix use-after-free bug
     new 8a00560de http2: move state-init from creation to pre-transfer
     new 249541f12 cookies: change argument type for Curl_flush_cookies
     new b902b0632 ngtcp2: adapt to API change
     new 1d7fe8390 winbuild: add ENABLE_UNICODE option
     new f7f0b0012 curl: ensure HTTP 429 triggers --retry
     new 04ab0108a RELEASE-NOTES: synced
     new df85b86a9 build: Remove unused HAVE_LIBSSL and HAVE_LIBCRYPTO defines
     new 8bb3a95ce ldap: fix OOM error on missing query string
     new b905e26b0 docs: added multi-event.c example
     new 637916387 CURLOPT_TIMEOUT.3: remove the mention of "minutes"
     new 67bb7926e TODO: Consult %APPDATA% also for .netrc
     new 93373a960 curl: --no-progress-meter
     new e5594e09f cirrus: Switch the FreeBSD 11.x build to 11.3 and add a 13.0 build.
     new 9e03faccc docs: document it as --no-progress-meter instead of the reverse
     new b1ae7f9b7 docs: make sure the --no-progress-meter docs file is in dist too
     new 60fcd3938 cirrus: Increase the git clone depth.
     new b8ea432d6 KNOWN_BUGS: IDN tests failing on Windows
     new b59f0626b tests: use port 2 instead of 60000 for a safer non-listening port
     new 5584aa96f cirrus: switch off blackhole status on the freebsd CI machines
     new 490effc19 connect: return CURLE_OPERATION_TIMEDOUT for errno == ETIMEDOUT
     new 41c69f473 RELEASE-NOTES: synced
     new bc2dbef0a socketpair: an implemention for Windows and more
     new 9c76f694d asyn-thread: make use of Curl_socketpair() where available
     new 1b843bb5e gskit: use the generic Curl_socketpair
     new 622cf7db6 socketpair: fix double-close in error case
     new 0dc14b838 socketpair: fix include and define for older TCP header systems
     new 02e608f0b appveyor: add a winbuild that uses VS2017
     new e80b5c801 KNOWN_BUGS: "LDAP on Windows does authentication wrong"
     new a81836a7f KNOWN_BUGS: remove "CURLFORM_CONTENTLEN in an array"
     new 07e987840 TODO: Handle growing SFTP files
     new be16d8d99 connect: silence sign-compare warning
     new a626fa128 security: silence conversion warning
     new ee6383773 smbserver: fix Python 3 compatibility
     new 476eb8817 tests: use proxy feature
     new 347075bc1 tests: line ending fixes for Windows
     new e06204343 url: normalize CURLINFO_EFFECTIVE_URL
     new fe5c2464d tool_operate: Fix retry sleep time shown to user when Retry-After
     new ce07f0b8a CURLOPT_TIMEOUT.3: Clarify transfer timeout time includes queue time
     new 9016049b3 RELEASE-NOTES: synced
     new fff1ba7a6 test1162: disable MSYS2's POSIX path conversion
     new 700438c55 configure: remove all cyassl references
     new 650677461 examples/sslbackend: fix -Wchar-subscripts warning
     new 1d642f055 travis: Add an ARM64 build
     new 59041f052 http2: expire a timeout at end of stream
     new 95a4cfd88 http2_recv: a closed stream trumps pause state
     new b35fbf526 appveyor: Add MSVC ARM64 build
     new cebbba9f9 runtests: get textaware info from curl instead of perl
     new 2e4405d29 tests: add `connect to non-listen` keywords
     new d81dbae19 tests: use %FILE_PWD for file:// URLs
     new 333e77d39 RELEASE-NOTES: synced
     new 2838fd91b tests: add missing proxy features
     new 807c056c0 conn-reuse: requests wanting NTLM can reuse non-NTLM connections
     new 503816250 appveyor: Use two parallel compilation on appveyor with CMake
     new b3378a793 test1591: fix spelling of http feature
     new 8986df802 schannel: reverse the order of certinfo insertions
     new a030d4835 appveyor: make winbuilds with DEBUG=no/yes and VS 2015/2017
     new 0f234a5cd appveyor: add --disable-proxy autotools build
     new e0ee3d9f9 HTTP3: fix Windows build
     new aeafa260c RELEASE-NOTES: synced
     new 9f5b26d23 HTTP3: fix invalid use of sendto for connected UDP socket
     new 37aea3c94 HTTP3: fix typo somehere1 > somewhere1
     new 32cc5ca7a examples: remove the "this exact code has not been verified"
     new 0cbd6f8df url: Curl_free_request_state() should also free doh handles
     new 4011802b3 INSTALL: add missing space for configure commands
     new dcd7e37c3 url: make Curl_close() NULLify the pointer too
     new 8d8b5ec34 appveyor: publish artifacts on appveyor
     new c2b01cce5 gtls: make gnutls_bye() not wait for response on shutdown
     new 9c4982490 schannel_verify: Fix concurrent openings of CA file
     new 9910d6b9a mbedtls: add error message for cert validity starting in the future
     new d0319adb0 copyrights: update all copyright notices to 2019 on files changed this year
     new 2839cfdc5 certs/Server-localhost-lastSAN-sv: regenerate with sha256
     new 07f898605 configure: only say ipv6 enabled when the variable is set
     new 9367428c7 THANKS: add new names from 7.67.0
     new 2e9b725f6 RELEASE-NOTES: synced
     new 03d326c16 Merge tag 'curl-7_67_0'
     new 38262bc29 awk scripts.
     new 37cfa3723 awk
     new 9108402eb awk
     new 6cdcf074f awk.
     new 84128aed7 rm sed.sh, add man_lint.sh to Makefile.
     new 8cd068292 include.
     new 46bb47bcc name, include.
     new 9ad7b8f91 Makefile.inc
     new 63b81ac1e minor

The 222 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
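
For convenience, the pushed range can also be inspected from a local clone of this gnurl repository; a minimal sketch, assuming the new objects have already been fetched (the two hashes are the old and new branch tips named in the subject):

    git fetch origin master
    git log --oneline 3671d2089..63b81ac1e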


Summary of changes:
 .cirrus.yml                                        |  25 +-
 .github/workflows/cpp.yml                          |  17 +
 .mailmap                                           |   1 +
 .travis.yml                                        |  70 ++-
 CMake/CurlTests.c                                  |   2 +-
 CMake/Platforms/WindowsCache.cmake                 |   1 -
 CMakeLists.txt                                     |   2 -
 README.md                                          |   3 +-
 RELEASE-NOTES                                      | 432 +++++++++++-------
 appveyor.yml                                       | 115 ++++-
 aux-gnurl/.gitignore                               |   1 +
 aux-gnurl/Makefile                                 |  22 +
 aux-gnurl/gnurl0.awk                               |   8 +
 aux-gnurl/gnurl1.awk                               |  10 +
 aux-gnurl/gnurl1.sh                                |  36 ++
 aux-gnurl/man_lint.sh                              |   4 +-
 aux-gnurl/sed.sh                                   |  31 --
 buildconf.bat                                      |   2 +-
 configure.ac                                       | 125 ++---
 docs/BINDINGS.md                                   |  11 +-
 docs/ESNI.md                                       | 139 ++++++
 docs/HTTP3.md                                      |  38 +-
 docs/INSTALL.md                                    |  16 +-
 docs/KNOWN_BUGS                                    |  22 +-
 docs/Makefile.am                                   |   1 +
 docs/THANKS                                        |  42 ++
 docs/THANKS-filter                                 |   2 +-
 docs/TODO                                          |  18 +
 docs/cmdline-opts/Makefile.inc                     |   3 +-
 docs/cmdline-opts/no-progress-meter.d              |  10 +
 docs/examples/Makefile.inc                         |   2 +-
 docs/examples/externalsocket.c                     |   2 +-
 docs/examples/ftp-wildcard.c                       |   2 +-
 docs/examples/htmltidy.c                           |   2 +-
 docs/examples/htmltitle.cpp                        |   2 +-
 docs/examples/http2-upload.c                       |   2 +-
 docs/examples/imap-append.c                        |   2 +-
 docs/examples/multi-app.c                          |   2 +-
 docs/examples/{multi-uv.c => multi-event.c}        |  79 ++--
 docs/examples/multithread.c                        |   2 +-
 docs/examples/postit2-formadd.c                    |   3 +-
 docs/examples/postit2.c                            |   3 +-
 docs/examples/resolve.c                            |   2 +-
 docs/examples/sampleconv.c                         |   2 +-
 docs/examples/sendrecv.c                           |   2 +-
 docs/examples/shared-connection-cache.c            |   2 +-
 docs/examples/smooth-gtk-thread.c                  |   2 +-
 docs/examples/smtp-mime.c                          |   2 +-
 docs/examples/sslbackend.c                         |   4 +-
 docs/examples/synctime.c                           |   2 +-
 docs/examples/threaded-shared-conn.c               |   2 +-
 docs/examples/threaded-ssl.c                       |   2 +-
 docs/libcurl/gnurl_multi_perform.3                 |   4 +-
 docs/libcurl/gnurl_multi_setopt.3                  |   4 +-
 docs/libcurl/gnurl_multi_wait.3                    |   2 +-
 docs/libcurl/gnurl_url_get.3                       |   5 +-
 docs/libcurl/gnurl_url_set.3                       |  11 +-
 docs/libcurl/libgnurl-errors.3                     |   2 +-
 docs/libcurl/libgnurl-multi.3                      |   5 +-
 docs/libcurl/libgnurl-tutorial.3                   |   2 +-
 ...SITE_BL.3 => CURLMOPT_MAX_CONCURRENT_STREAMS.3} |  40 +-
 docs/libcurl/opts/GNURLOPT_CURLU.3                 |   2 +-
 docs/libcurl/opts/GNURLOPT_FOLLOWLOCATION.3        |   2 +-
 docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3        |   2 +-
 docs/libcurl/opts/GNURLOPT_HEADEROPT.3             |   2 +-
 docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3          |   2 +-
 docs/libcurl/opts/GNURLOPT_LOCALPORT.3             |   2 +-
 docs/libcurl/opts/GNURLOPT_LOCALPORTRANGE.3        |   2 +-
 docs/libcurl/opts/GNURLOPT_PROXY_SSLVERSION.3      |   2 +-
 docs/libcurl/opts/GNURLOPT_PROXY_TLS13_CIPHERS.3   |   2 +-
 docs/libcurl/opts/GNURLOPT_RANGE.3                 |   2 +-
 docs/libcurl/opts/GNURLOPT_REDIR_PROTOCOLS.3       |   2 +-
 docs/libcurl/opts/GNURLOPT_SEEKDATA.3              |   2 +-
 docs/libcurl/opts/GNURLOPT_SSLVERSION.3            |   2 +-
 docs/libcurl/opts/GNURLOPT_TIMEOUT.3               |  19 +-
 docs/libcurl/opts/GNURLOPT_TLS13_CIPHERS.3         |   2 +-
 docs/libcurl/opts/GNURLOPT_TRAILERDATA.3           |  12 +-
 docs/libcurl/opts/GNURLOPT_TRAILERFUNCTION.3       |  14 +-
 docs/libcurl/opts/Makefile.inc                     |   9 +-
 docs/libcurl/symbols-in-versions                   |   3 +
 include/gnurl/curl.h                               |   2 +
 include/gnurl/curlver.h                            |   6 +-
 include/gnurl/multi.h                              |   6 +
 include/gnurl/urlapi.h                             |   2 +
 lib/Makefile.inc                                   |   4 +-
 lib/Makefile.netware                               |   2 -
 lib/altsvc.c                                       |  20 +-
 lib/asyn-thread.c                                  |  28 +-
 lib/checksrc.pl                                    |   2 +-
 lib/config-amigaos.h                               |   4 +-
 lib/config-os400.h                                 |   6 -
 lib/config-plan9.h                                 |   1 -
 lib/config-riscos.h                                |   8 +-
 lib/config-symbian.h                               |   3 -
 lib/config-tpf.h                                   |   6 +-
 lib/config-vxworks.h                               |   3 -
 lib/conncache.c                                    |   8 +-
 lib/connect.c                                      |  15 +-
 lib/cookie.c                                       |  19 +-
 lib/cookie.h                                       |   2 +-
 lib/curl_config.h.cmake                            |   3 -
 lib/doh.c                                          |  58 ++-
 lib/easy.c                                         |   7 +-
 lib/ftp.c                                          | 396 ++++++++--------
 lib/ftp.h                                          |   6 +-
 lib/ftplistparser.c                                |   2 +-
 lib/hostcheck.c                                    |   2 +-
 lib/hostip.c                                       |   2 +-
 lib/http.c                                         |  17 +-
 lib/http.h                                         |   5 -
 lib/http2.c                                        |  49 +-
 lib/http_chunks.c                                  |  28 +-
 lib/http_chunks.h                                  |  13 +-
 lib/http_proxy.c                                   |   9 +-
 lib/imap.c                                         |   5 +-
 lib/ldap.c                                         |  24 +-
 lib/mime.c                                         |  19 +-
 lib/mime.h                                         |   6 +-
 lib/multi.c                                        |  18 +-
 lib/multihandle.h                                  |   1 +
 lib/multiif.h                                      |   6 +
 lib/netrc.c                                        |   2 +-
 lib/non-ascii.c                                    |   2 +-
 lib/parsedate.c                                    |  14 +-
 lib/security.c                                     |   2 +-
 lib/setopt.c                                       |  25 +-
 lib/setup-os400.h                                  |   6 +-
 lib/smb.c                                          |   3 +-
 lib/socketpair.c                                   | 118 +++++
 lib/{strtok.h => socketpair.h}                     |  22 +-
 lib/socks.c                                        |  64 +--
 lib/strcase.c                                      |  86 +++-
 lib/strcase.h                                      |   4 +-
 lib/transfer.c                                     |  14 +-
 lib/url.c                                          |  61 ++-
 lib/url.h                                          |   2 +-
 lib/urlapi.c                                       | 156 ++++---
 lib/urldata.h                                      | 393 ++++++++--------
 lib/vauth/vauth.h                                  |   2 +-
 lib/version.c                                      |  15 +-
 lib/vquic/ngtcp2.c                                 | 103 ++---
 lib/vquic/quiche.c                                 |  33 +-
 lib/vssh/libssh.c                                  |   6 +-
 lib/vssh/libssh2.c                                 |   4 +-
 lib/vtls/gskit.c                                   | 102 +----
 lib/vtls/gtls.c                                    |   6 +-
 lib/vtls/mbedtls.c                                 |   7 +-
 lib/vtls/mesalink.c                                |   7 +-
 lib/vtls/nss.c                                     |   2 +-
 lib/vtls/openssl.c                                 |  32 +-
 lib/vtls/polarssl.c                                |   4 +-
 lib/vtls/schannel.c                                |  12 +-
 lib/vtls/schannel_verify.c                         |   2 +-
 lib/vtls/sectransp.c                               |   6 +-
 lib/vtls/vtls.c                                    |   5 +-
 m4/curl-confopts.m4                                |  38 +-
 packages/OS400/curl.inc.in                         |   2 +
 packages/OS400/os400sys.c                          | 156 ++++---
 scripts/delta                                      |   4 +-
 src/tool_cfgable.h                                 |  19 +-
 src/tool_getparam.c                                |  22 +-
 src/tool_help.c                                    |   3 +
 src/tool_metalink.c                                |  14 +-
 src/tool_metalink.h                                |   3 +
 src/tool_operate.c                                 | 503 ++++++++++++---------
 src/tool_operhlp.c                                 |  24 +-
 src/tool_paramhlp.c                                |  31 +-
 src/tool_setopt.h                                  |   2 +-
 src/tool_urlglob.c                                 |   3 +
 tests/certs/Server-localhost-lastSAN-sv.crl        |  16 +-
 tests/certs/Server-localhost-lastSAN-sv.crt        | 113 ++---
 tests/certs/Server-localhost-lastSAN-sv.csr        |  24 +-
 tests/certs/Server-localhost-lastSAN-sv.der        | Bin 994 -> 994 bytes
 tests/certs/Server-localhost-lastSAN-sv.key        |  50 +-
 tests/certs/Server-localhost-lastSAN-sv.pem        | 163 +++----
 tests/certs/Server-localhost-lastSAN-sv.pub.der    | Bin 294 -> 294 bytes
 tests/certs/Server-localhost-lastSAN-sv.pub.pem    |  14 +-
 tests/data/Makefile.inc                            |  13 +-
 tests/data/test1002                                |   1 +
 tests/data/test1008                                |   1 +
 tests/data/test1010                                |   4 +-
 tests/data/test1016                                |   2 +-
 tests/data/test1017                                |   2 +-
 tests/data/test1018                                |   2 +-
 tests/data/test1019                                |   2 +-
 tests/data/test1020                                |   2 +-
 tests/data/test1021                                |   1 +
 tests/data/test1059                                |   1 +
 tests/data/test1060                                |   1 +
 tests/data/test1061                                |   1 +
 tests/data/test1063                                |   2 +-
 tests/data/test1077                                |   1 +
 tests/data/test1078                                |   3 +
 tests/data/test1087                                |   3 +
 tests/data/test1088                                |   3 +
 tests/data/test1091                                |   3 +-
 tests/data/test1092                                |   1 +
 tests/data/test1098                                |   1 +
 tests/data/test1104                                |   3 +
 tests/data/test1106                                |   1 +
 tests/data/test1136                                |   1 +
 tests/data/test1141                                |   3 +
 tests/data/test1142                                |   3 +
 tests/data/test1149                                |   2 +-
 tests/data/test1150                                |   3 +
 tests/data/test1162                                |   4 +
 tests/data/{test199 => test1166}                   |  34 +-
 tests/data/test1213                                |   3 +
 tests/data/test1214                                |   3 +
 tests/data/test1215                                |   1 +
 tests/data/test1216                                |   3 +
 tests/data/test1218                                |   3 +
 tests/data/test1220                                |   2 +-
 tests/data/test1225                                |   1 -
 tests/data/test1228                                |   3 +
 tests/data/test1230                                |   1 +
 tests/data/test1232                                |   3 +
 tests/data/test1233                                |   1 +
 tests/data/test1241                                |   3 +
 tests/data/test1246                                |   3 +
 tests/data/test1253                                |   3 +
 tests/data/test1254                                |   3 +
 tests/data/test1256                                |   3 +
 tests/data/test1257                                |   3 +
 tests/data/test1287                                |   3 +
 tests/data/test1288                                |   3 +
 tests/data/test1314                                |   3 +
 tests/data/test1319                                |   1 +
 tests/data/test1320                                |   1 +
 tests/data/test1321                                |   1 +
 tests/data/test1329                                |   3 +
 tests/data/test1331                                |   3 +
 tests/data/test1415                                |   3 +
 tests/data/test1421                                |   3 +
 tests/data/test1428                                |   3 +
 tests/data/test143                                 |   3 +-
 tests/data/test1445                                |   2 +-
 tests/data/test1447                                |   1 +
 tests/data/test1455                                |   3 +
 tests/data/test1456                                |   3 +
 tests/data/test1509                                |   4 +-
 tests/data/test1525                                |   3 +
 tests/data/test1526                                |   3 +
 tests/data/test1527                                |   3 +
 tests/data/test1528                                |   3 +
 tests/data/test1529                                |   3 +
 tests/data/test1591                                |   2 +-
 tests/data/test1596                                |   2 +-
 tests/data/test16                                  |   3 +
 tests/data/test162                                 |   1 +
 tests/data/test165                                 |   1 +
 tests/data/test1654                                |   1 +
 tests/data/{test1600 => test1655}                  |   7 +-
 tests/data/test167                                 |   1 +
 tests/data/test168                                 |   1 +
 tests/data/test169                                 |   1 +
 tests/data/test170                                 |   1 +
 tests/data/test171                                 |   3 +
 tests/data/test179                                 |   3 +
 tests/data/test183                                 |   3 +
 tests/data/test184                                 |   3 +
 tests/data/test185                                 |   3 +
 tests/data/test19                                  |   2 +-
 tests/data/test1904                                |   3 +
 tests/data/{test1906 => test1907}                  |  17 +-
 tests/data/test200                                 |   2 +-
 tests/data/test2000                                |   2 +-
 tests/data/test2001                                |   2 +-
 tests/data/test2002                                |   2 +-
 tests/data/test2003                                |   2 +-
 tests/data/test2004                                |   2 +-
 tests/data/test2005                                |   2 +-
 tests/data/test2006                                |   2 +-
 tests/data/test2007                                |   2 +-
 tests/data/test2008                                |   2 +-
 tests/data/test2009                                |   2 +-
 tests/data/test2010                                |   2 +-
 tests/data/test2011                                |   2 +-
 tests/data/test2012                                |   2 +-
 tests/data/test2013                                |   2 +-
 tests/data/test2014                                |   2 +-
 tests/data/test2015                                |   2 +-
 tests/data/test2016                                |   2 +-
 tests/data/test2017                                |   2 +-
 tests/data/test2018                                |   2 +-
 tests/data/test2019                                |   2 +-
 tests/data/test202                                 |   2 +-
 tests/data/test2020                                |   2 +-
 tests/data/test2021                                |   2 +-
 tests/data/test2022                                |   2 +-
 tests/data/test204                                 |   2 +-
 tests/data/test2050                                |   3 +
 tests/data/test2055                                |   4 +-
 tests/data/test2058                                |   1 +
 tests/data/test2059                                |   1 +
 tests/data/test206                                 |   1 +
 tests/data/test2060                                |   1 +
 tests/data/test2071                                |   2 +-
 tests/data/test208                                 |   1 +
 tests/data/test209                                 |   1 +
 tests/data/test213                                 |   1 +
 tests/data/test217                                 |   3 +
 tests/data/test219                                 |   1 +
 tests/data/test231                                 |   2 +-
 tests/data/test233                                 |   3 +
 tests/data/test234                                 |   3 +
 tests/data/test239                                 |   1 +
 tests/data/test243                                 |   1 +
 tests/data/test244                                 |   2 +-
 tests/data/test256                                 |   3 +
 tests/data/test257                                 |   4 +-
 tests/data/test258                                 |   1 +
 tests/data/test259                                 |   1 +
 tests/data/test263                                 |   1 +
 tests/data/test264                                 |   3 +
 tests/data/test265                                 |   1 +
 tests/data/test275                                 |   3 +
 tests/data/test278                                 |   3 +
 tests/data/test279                                 |   3 +
 tests/data/test287                                 |   3 +
 tests/data/test288                                 |   2 +-
 tests/data/test299                                 |   1 +
 tests/data/test317                                 |   3 +
 tests/data/test318                                 |   3 +
 tests/data/test330                                 |   3 +
 tests/data/test331                                 |   3 +
 tests/data/test335                                 |   1 +
 tests/data/{test105 => test336}                    |  17 +-
 tests/data/{test1137 => test337}                   |  18 +-
 tests/data/{test199 => test338}                    |  16 +-
 tests/data/test356                                 |   2 +-
 tests/data/test43                                  |   3 +
 tests/data/test5                                   |   3 +
 tests/data/test503                                 |   4 +-
 tests/data/test504                                 |   2 +
 tests/data/test506                                 | 118 ++---
 tests/data/test523                                 |   3 +
 tests/data/test539                                 |   2 +-
 tests/data/test540                                 |   1 +
 tests/data/test547                                 |   1 +
 tests/data/test548                                 |   1 +
 tests/data/test549                                 |   1 +
 tests/data/test550                                 |   1 +
 tests/data/test551                                 |   1 +
 tests/data/test555                                 |   1 +
 tests/data/test561                                 |   1 +
 tests/data/test563                                 |   4 +-
 tests/data/test564                                 |   3 +
 tests/data/test590                                 |   1 +
 tests/data/test62                                  |   4 +-
 tests/data/test63                                  |   3 +
 tests/data/test659                                 |   3 +
 tests/data/{test520 => test661}                    |  42 +-
 tests/data/{test1011 => test662}                   |  38 +-
 tests/data/test663                                 |  82 ++++
 tests/data/test702                                 |   1 +
 tests/data/test703                                 |   1 +
 tests/data/test704                                 |   5 +-
 tests/data/test705                                 |   5 +-
 tests/data/test714                                 |   1 +
 tests/data/test715                                 |   1 +
 tests/data/test716                                 |   1 +
 tests/data/test717                                 |   3 +
 tests/data/test79                                  |   1 +
 tests/data/test80                                  |   3 +
 tests/data/test81                                  |   1 +
 tests/data/test82                                  |   1 +
 tests/data/test83                                  |   3 +
 tests/data/test84                                  |   3 +
 tests/data/test85                                  |   3 +
 tests/data/test93                                  |   3 +
 tests/data/test94                                  |   1 +
 tests/data/test95                                  |   3 +
 tests/libtest/Makefile.inc                         |  11 +-
 tests/libtest/lib1156.c                            |   2 +-
 tests/libtest/lib1522.c                            |   2 +-
 tests/libtest/lib1560.c                            |  43 ++
 tests/libtest/{lib1906.c => lib1907.c}             |  30 +-
 tests/libtest/lib506.c                             |   3 +-
 tests/libtest/lib509.c                             |   2 +-
 tests/libtest/lib541.c                             |   2 +-
 tests/libtest/lib557.c                             |   2 +-
 tests/libtest/lib569.c                             |   2 +-
 tests/libtest/lib571.c                             |   2 +-
 tests/libtest/lib661.c                             | 150 ++++++
 tests/manpage-scan.pl                              |   4 +-
 tests/runtests.pl                                  |   2 +-
 tests/server/.gitignore                            |   1 +
 tests/server/util.c                                |   2 +-
 tests/smbserver.py.in                              |   7 +-
 tests/unit/CMakeLists.txt                          |   1 +
 tests/unit/Makefile.inc                            |   6 +-
 tests/unit/README                                  |   6 +-
 tests/unit/unit1303.c                              |   6 +-
 tests/unit/unit1307.c                              |   2 +-
 tests/unit/unit1399.c                              |   4 +-
 tests/unit/unit1620.c                              |   4 +-
 tests/unit/unit1654.c                              |  13 +-
 tests/unit/unit1655.c                              | 113 +++++
 winbuild/Makefile.vc                               |  10 +
 winbuild/MakefileBuild.vc                          |  26 +-
 401 files changed, 3891 insertions(+), 2214 deletions(-)
 create mode 100644 .github/workflows/cpp.yml
 create mode 100644 aux-gnurl/.gitignore
 create mode 100644 aux-gnurl/Makefile
 create mode 100755 aux-gnurl/gnurl0.awk
 create mode 100755 aux-gnurl/gnurl1.awk
 create mode 100644 aux-gnurl/gnurl1.sh
 delete mode 100755 aux-gnurl/sed.sh
 create mode 100644 docs/ESNI.md
 create mode 100644 docs/cmdline-opts/no-progress-meter.d
 copy docs/examples/{multi-uv.c => multi-event.c} (78%)
 copy docs/libcurl/opts/{GNURLMOPT_PIPELINING_SITE_BL.3 => CURLMOPT_MAX_CONCURRENT_STREAMS.3} (60%)
 create mode 100644 lib/socketpair.c
 copy lib/{strtok.h => socketpair.h} (68%)
 copy tests/data/{test199 => test1166} (58%)
 copy tests/data/{test1600 => test1655} (83%)
 copy tests/data/{test1906 => test1907} (67%)
 copy tests/data/{test105 => test336} (68%)
 copy tests/data/{test1137 => test337} (66%)
 copy tests/data/{test199 => test338} (69%)
 copy tests/data/{test520 => test661} (53%)
 copy tests/data/{test1011 => test662} (53%)
 create mode 100644 tests/data/test663
 copy tests/libtest/{lib1906.c => lib1907.c} (64%)
 create mode 100644 tests/libtest/lib661.c
 create mode 100644 tests/unit/unit1655.c
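
The per-file statistics above and the patch fragments below can be regenerated from a local clone; a rough sketch, assuming both endpoint commits are present (copy/rename percentages may vary with git's rename-detection settings):

    git diff --stat --summary 3671d2089 63b81ac1e
    git diff 3671d2089 63b81ac1e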

diff --git a/.cirrus.yml b/.cirrus.yml
index 21d7b62ab..dc7e2299a 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -5,12 +5,15 @@ task:
   name: FreeBSD
   freebsd_instance:
     matrix:
-      image: freebsd-12-0-release-amd64
-      image: freebsd-11-2-release-amd64
-      image: freebsd-10-4-release-amd64
+      # There isn't a stable 13.0 image yet (2019-10)
+      image_family: freebsd-13-0-snap
+      image_family: freebsd-12-0
+      # The stable 11.3 image causes "Agent is not responding" so use a snapshot
+      image_family: freebsd-11-3-snap
+      image_family: freebsd-10-4
 
   env:
-    CIRRUS_CLONE_DEPTH: 1
+    CIRRUS_CLONE_DEPTH: 10
     MAKE_FLAGS: -j 2
 
   pkginstall_script:
@@ -22,15 +25,23 @@ task:
   compile_script:
     - make V=1
   test_script:
+    # blackhole?
+    - sysctl net.inet.tcp.blackhole
+    # make sure we don't run blackhole != 0
+    - sudo sysctl net.inet.tcp.blackhole=0
     # Some tests won't run if run as root so run them as another user.
     # Make directories world writable so the test step can write wherever it needs.
     - find . -type d -exec chmod 777 {} \;
     # TODO: A number of tests are failing on different FreeBSD versions and so
     # are disabled.  This should be investigated.
     - SKIP_TESTS=''
-    - if [ `uname -r` = "12.0-RELEASE" ] ; then SKIP_TESTS='!303 !304 !323 !504 !1242 !1243 !2002 !2003'; fi
-    - if [ `uname -r` = "11.2-RELEASE" ] ; then SKIP_TESTS='!303 !304 !310 !311 !312 !313 !504 !1082 !1242 !1243 !2002 !2003 !2034 !2035 !2037 !2038 !2041 !2042 !2048 !3000 !3001'; fi
-    - if [ `uname -r` = "10.4-RELEASE" ] ; then SKIP_TESTS='!303 !304 !310 !311 !312 !313 !504 !1082 !1242 !1243 !2002 !2003 !2034 !2035 !2037 !2038 !2041 !2042 !2048 !3000 !3001'; fi
+    - uname -r
+    - case `uname -r` in
+        13.0*) SKIP_TESTS='!303 !304 !323 !504 !1242 !1243 !2002 !2003';;
+        12.0*) SKIP_TESTS='!303 !304 !323 !504 !1242 !1243 !2002 !2003';;
+        11.3*) SKIP_TESTS='!303 !304 !504 !1242 !1243 !2002 !2003';;
+        10.4*) SKIP_TESTS='!303 !304 !310 !311 !312 !313 !504 !1082 !1242 !1243 !2002 !2003 !2034 !2035 !2037 !2038 !2041 !2042 !2048 !3000 !3001';;
+      esac
    - sudo -u nobody make V=1 TFLAGS="-n -a -p !flaky ${SKIP_TESTS}" test-nonflaky
   install_script:
     - make V=1 install
diff --git a/.github/workflows/cpp.yml b/.github/workflows/cpp.yml
new file mode 100644
index 000000000..9f3b87eb5
--- /dev/null
+++ b/.github/workflows/cpp.yml
@@ -0,0 +1,17 @@
+name: Build on Ubuntu with default options
+
+on: [push]
+
+jobs:
+  build:
+
+    runs-on: ubuntu-latest
+    
+    steps:
+    - uses: actions/checkout@v1
+    - name: configure
+      run: ./buildconf && ./configure
+    - name: make
+      run: make
+    - name: make check
+      run: make check
diff --git a/.mailmap b/.mailmap
index 7e67800c2..e38055f78 100644
--- a/.mailmap
+++ b/.mailmap
@@ -53,3 +53,4 @@ Peter Pih <address@hidden>
 Anton Malov <address@hidden>
 Marquis de Muesli <address@hidden>
 Kyohei Kadota <address@hidden>
+Lucas Pardue <address@hidden> <address@hidden>
diff --git a/.travis.yml b/.travis.yml
index b468c6f61..3c4fb43e5 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -86,9 +86,32 @@ matrix:
         - os: linux
           compiler: gcc
           dist: xenial
+          before_install:
+              # Install and use the current stable release of Go
+              - gimme --list
+              - eval "$(gimme stable)"
+              - gimme --list
           env:
              - T=novalgrind BORINGSSL=yes C="--with-ssl=$HOME/boringssl" LD_LIBRARY_PATH=/home/travis/boringssl/lib:/usr/local/lib
               - OVERRIDE_CC="CC=gcc-8" OVERRIDE_CXX="CXX=g++-8"
+          addons:
+              apt:
+                  sources:
+                      - ppa:longsleep/golang-backports
+                      - *common_sources
+                  packages:
+                      - *common_packages
+        - os: linux
+          compiler: gcc
+          dist: xenial
+          before_install:
+              # Install and use the current stable release of Go
+              - gimme --list
+              - eval "$(gimme stable)"
+              - gimme --list
+          env:
+              - T=novalgrind BORINGSSL=yes QUICHE="yes" C="--with-ssl=$HOME/boringssl --with-quiche=$HOME/quiche/target/release --enable-alt-svc" LD_LIBRARY_PATH=/home/travis/boringssl/lib:$HOME/quiche/target/release:/usr/local/lib
+              - OVERRIDE_CC="CC=gcc-8" OVERRIDE_CXX="CXX=g++-8"
           addons:
               apt:
                   sources:
@@ -101,7 +124,7 @@ matrix:
           compiler: gcc
           dist: xenial
           env:
-              - T=novalgrind BORINGSSL=yes QUICHE="yes" C="--with-ssl=$HOME/boringssl --with-quiche=$HOME/quiche/target/release --enable-alt-svc" LD_LIBRARY_PATH=/home/travis/boringssl/lib:$HOME/quiche/target/release:/usr/local/lib
+              - T=novalgrind NGTCP2=yes C="--with-ssl=$HOME/ngbuild --with-ngtcp2=$HOME/ngbuild --with-nghttp3=$HOME/ngbuild --enable-alt-svc" NOTESTS=
               - OVERRIDE_CC="CC=gcc-8" OVERRIDE_CXX="CXX=g++-8"
           addons:
               apt:
@@ -111,20 +134,6 @@ matrix:
                       - *common_packages
                       - libpsl-dev
                       - libbrotli-dev
-        #- os: linux
-        #  compiler: gcc
-        #  dist: xenial
-        #  env:
-        #      - T=novalgrind NGTCP2=yes C="--with-ssl=$HOME/ngbuild --with-ngtcp2=$HOME/ngbuild --with-nghttp3=$HOME/ngbuild --enable-alt-svc" NOTESTS=
-        #      - OVERRIDE_CC="CC=gcc-8" OVERRIDE_CXX="CXX=g++-8"
-        #  addons:
-        #      apt:
-        #          sources:
-        #              - *common_sources
-        #          packages:
-        #              - *common_packages
-        #              - libpsl-dev
-        #              - libbrotli-dev
         - os: linux
           compiler: gcc
           dist: xenial
@@ -408,6 +417,26 @@ matrix:
                       - clang-7
                       - libpsl-dev
                       - libbrotli-dev
+        - os: linux
+          arch: arm64
+          compiler: gcc
+          dist: xenial
+          env:
+              - T=debug C="--enable-alt-svc"
+              - OVERRIDE_CC="CC=gcc-8" OVERRIDE_CXX="CXX=g++-8"
+          addons:
+              apt:
+                  sources:
+                      - *common_sources
+                  packages:
+                      - *common_packages
+                      - libpsl-dev
+                      - libbrotli-dev
+                      - libev-dev
+                      - libssl-dev
+                      - libtool
+                      - pkg-config
+                      - zlib1g-dev
 
 before_install:
     - eval "${OVERRIDE_CC}"
@@ -424,7 +453,7 @@ before_script:
     - |
       if [ "$NGTCP2" = yes ]; then
        (cd $HOME &&
-       git clone --depth 1 -b openssl-quic-draft-22 https://github.com/tatsuhiro-t/openssl possl &&
+       git clone --depth 1 -b openssl-quic-draft-23 https://github.com/tatsuhiro-t/openssl possl &&
        cd possl &&
        ./config enable-tls1_3 --prefix=$HOME/ngbuild &&
        make && make install_sw &&
@@ -437,7 +466,7 @@ before_script:
        make && make install &&
 
        cd .. &&
-       git clone --depth 1 -b draft-22 https://github.com/ngtcp2/ngtcp2 &&
+       git clone --depth 1 https://github.com/ngtcp2/ngtcp2 &&
        cd ngtcp2 &&
        autoreconf -i &&
       ./configure PKG_CONFIG_PATH=$HOME/ngbuild/lib/pkgconfig LDFLAGS="-Wl,-rpath,$HOME/ngbuild/lib" --prefix=$HOME/ngbuild &&
@@ -537,7 +566,12 @@ script:
              ./configure --enable-debug --enable-werror $C
              make && make examples
              if [ -z $NOTESTS ]; then
-                make TFLAGS=-n test-nonflaky
+                if [ "$TRAVIS_ARCH" = "aarch64" ] ; then
+                    # TODO: find out why this test is failing on arm64
+                    make "TFLAGS=-n !323" test-nonflaky
+                else
+                    make TFLAGS=-n test-nonflaky
+                fi
              fi
         fi
     - |
diff --git a/CMake/CurlTests.c b/CMake/CurlTests.c
index 2a7632951..3ef35f025 100644
--- a/CMake/CurlTests.c
+++ b/CMake/CurlTests.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2014, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/CMake/Platforms/WindowsCache.cmake b/CMake/Platforms/WindowsCache.cmake
index cafaec216..ead4115a3 100644
--- a/CMake/Platforms/WindowsCache.cmake
+++ b/CMake/Platforms/WindowsCache.cmake
@@ -7,7 +7,6 @@ if(NOT UNIX)
     set(HAVE_LIBNSL 0)
     set(HAVE_GETHOSTNAME 1)
     set(HAVE_LIBZ 0)
-    set(HAVE_LIBCRYPTO 0)
 
     set(HAVE_DLOPEN 0)
 
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 27f99295d..66d771087 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -347,8 +347,6 @@ if(CMAKE_USE_OPENSSL)
   find_package(OpenSSL REQUIRED)
   set(SSL_ENABLED ON)
   set(USE_OPENSSL ON)
-  set(HAVE_LIBCRYPTO ON)
-  set(HAVE_LIBSSL ON)
 
   # Depend on OpenSSL via imported targets if supported by the running
   # version of CMake.  This allows our dependents to get our dependencies
diff --git a/README.md b/README.md
index 80c7ea85b..480b53bab 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,7 @@
 [![Sponsors on Open Collective](https://opencollective.com/curl/sponsors/badge.svg)](#sponsors)
 [![Language Grade: C/C++](https://img.shields.io/lgtm/grade/cpp/g/curl/curl.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/curl/curl/context:cpp)
 [![Codacy Badge](https://api.codacy.com/project/badge/Grade/d11483a0cc5c4ebd9da4ff9f7cd56690)](https://www.codacy.com/app/curl/curl?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=curl/curl&amp;utm_campaign=Badge_Grade)
+[![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/curl.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:curl)
 
 Curl is a command-line tool for transferring data specified with URL
 syntax. Find out how to use curl by reading [the curl.1 man
@@ -21,7 +22,7 @@ libcurl is the library curl is using to do its job. It is readily available to
 be used by your software. Read [the libcurl.3 man
 page](https://curl.haxx.se/libcurl/c/libcurl.html) to learn how!
 
-You find answers to the most frequent questions we get in [the FAQ
+You can find answers to the most frequent questions we get in [the FAQ
 document](https://curl.haxx.se/docs/faq.html).
 
 Study [the COPYING file](https://curl.haxx.se/docs/copyright.html) for
diff --git a/RELEASE-NOTES b/RELEASE-NOTES
index cd13827f5..cea2debda 100644
--- a/RELEASE-NOTES
+++ b/RELEASE-NOTES
@@ -1,103 +1,144 @@
-curl and libcurl 7.66.0
+curl and libcurl 7.67.0
 
- Public curl releases:         185
- Command line options:         225
+ Public curl releases:         186
+ Command line options:         226
  curl_easy_setopt() options:   269
  Public functions in libcurl:  81
- Contributors:                 1991
+ Contributors:                 2056
 
 This release includes the following changes:
 
- o CURLINFO_RETRY_AFTER: parse the Retry-After header value [35]
- o HTTP3: initial (experimental still not working) support [5]
 - o curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool [27]
- o curl: support parallel transfers with -Z [4]
- o curl_multi_poll: a sister to curl_multi_wait() that waits more [28]
- o sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID [27]
+ o curl: added --no-progress-meter [73]
+ o setopt: CURLMOPT_MAX_CONCURRENT_STREAMS is new [55]
+ o urlapi: CURLU_NO_AUTHORITY allows empty authority/host part [22]
 
 This release includes the following bugfixes:
 
- o CVE-2019-5481: FTP-KRB double-free [64]
- o CVE-2019-5482: TFTP small blocksize heap buffer overflow [65]
- o CI: remove duplicate configure flag for LGTM.com
- o CMake: remove needless newlines at end of gss variables
- o CMake: use platform dependent name for dlopen() library [62]
- o CURLINFO docs: mention that in redirects times are added [55]
- o CURLOPT_ALTSVC.3: use a "" file name to not load from a file
- o CURLOPT_ALTSVC_CTRL.3: remove CURLALTSVC_ALTUSED
- o CURLOPT_HEADERFUNCTION.3: clarify [54]
- o CURLOPT_HTTP_VERSION: seting this to 3 forces HTTP/3 use directly [33]
- o CURLOPT_READFUNCTION.3: provide inline example
- o CURLOPT_SSL_VERIFYHOST: treat the value 1 as 2 [51]
- o Curl_addr2string: take an addrlen argument too [61]
- o Curl_fillreadbuffer: avoid double-free trailer buf on error [66]
- o HTTP: use chunked Transfer-Encoding for HTTP_POST if size unknown [10]
- o alt-svc: add protocol version selection masking [31]
- o alt-svc: fix removal of expired cache entry [30]
- o alt-svc: make it use h3-22 with ngtcp2 as well
- o alt-svc: more liberal ALPN name parsing [17]
- o alt-svc: send Alt-Used: in redirected requests [32]
- o alt-svc: with quiche, use the quiche h3 alpn string [16]
- o appveyor: pass on -k to make
- o asyn-thread: create a socketpair to wait on [14]
- o build-openssl: fix build with Visual Studio 2019 [45]
- o cleanup: move functions out of url.c and make them static [58]
- o cleanup: remove the 'numsocks' argument used in many places [25]
- o configure: avoid undefined check_for_ca_bundle [37]
- o curl.h: add CURL_HTTP_VERSION_3 to the version enum
- o curl.h: fix outdated comment [23]
- o curl: cap the maximum allowed values for retry time arguments [13]
- o curl: handle a libcurl build without netrc support [63]
- o curl: make use of CURLINFO_RETRY_AFTER when retrying [35]
- o curl: remove outdated comment [24]
- o curl: use .curlrc (with a dot) on Windows [52]
- o curl: use CURLINFO_PROTOCOL to check for HTTP(s)
- o curl_global_init_mem.3: mention it was added in 7.12.0
- o curl_version: bump string buffer size to 250
- o curl_version_info.3: mentioned ALTSVC and HTTP3
- o curl_version_info: offer quic (and h3) library info [38]
- o curl_version_info: provide nghttp2 details [2]
- o defines: avoid underscore-prefixed defines [47]
- o docs/ALTSVC: remove what works and the experimental explanation [34]
- o docs/EXPERIMENTAL: explain what it means and what's experimental now
- o docs/MANUAL.md: converted to markdown from plain text [3]
- o docs/examples/curlx: fix errors [48]
- o docs: s/curl_debug/curl_dbg_debug in comments and docs [36]
- o easy: resize receive buffer on easy handle reset [9]
- o examples: Avoid reserved names in hiperfifo examples [8]
- o examples: add http3.c, altsvc.c and http3-present.c [40]
- o getenv: support up to 4K environment variable contents on windows [21]
- o http09: disable HTTP/0.9 by default in both tool and library [29]
- o http2: when marked for closure and wanted to close == OK [56]
- o http2_recv: trigger another read when the last data is returned [11]
- o http: fix use of credentials from URL when using HTTP proxy [44]
- o http_negotiate: improve handling of gss_init_sec_context() failures [18]
- o md4: Use our own MD4 when no crypto libraries are available [15]
- o multi: call detach_connection before Curl_disconnect [6]
- o netrc: make the code try ".netrc" on Windows [52]
- o nss: use TLSv1.3 as default if supported [39]
- o openssl: build warning free with boringssl [50]
- o openssl: use SSL_CTX_set_<min|max>_proto_version() when available [68]
- o plan9: add support for running on Plan 9 [22]
- o progress: reset download/uploaded counter between transfers [12]
- o readwrite_data: repair setting the TIMER_STARTTRANSFER stamp [26]
- o scp: fix directory name length used in memcpy [46]
- o smb: init *msg to NULL in smb_send_and_recv() [60]
- o smtp: check for and bail out on too short EHLO response [59]
- o source: remove names from source comments [1]
- o spnego_sspi: add typecast to fix build warning [49]
- o src/makefile: fix uncompressed hugehelp.c generation [19]
- o ssh-libssh: do not specify O_APPEND when not in append mode [7]
- o ssh: move code into vssh for SSH backends [53]
- o sspi: fix memory leaks [67]
- o tests: Replace outdated test case numbering documentation [43]
- o tftp: return error when packet is too small for options
- o timediff: make it 64 bit (if possible) even with 32 bit time_t [20]
- o travis: reduce number of torture tests in 'coverage' [42]
- o url: make use of new HTTP version if alt-svc has one [16]
- o urlapi: verify the IPv6 numerical address [69]
- o urldata: avoid 'generic', use dedicated pointers [57]
- o vauth: Use CURLE_AUTH_ERROR for auth function errors [41]
+ o BINDINGS: five new bindings added
+ o CURLOPT_TIMEOUT.3: Clarify transfer timeout time includes queue time [78]
+ o CURLOPT_TIMEOUT.3: remove the mention of "minutes" [74]
+ o ESNI: initial build/setup support [71]
+ o FTP: FTPFILE_NOCWD: avoid redundant CWDs [28]
+ o FTP: allow "rubbish" prepended to the SIZE response [15]
+ o FTP: remove trailing slash from path for LIST/MLSD [6]
+ o FTP: skip CWD to entry dir when target is absolute [16]
+ o FTP: url-decode path before evaluation [36]
+ o HTTP3.md: move -p for mkdir, remove -j for make [46]
+ o HTTP3: fix invalid use of sendto for connected UDP socket [109]
+ o HTTP3: fix ngtcp2 Windows build [93]
+ o HTTP3: fix prefix parameter for ngtcp2 build [40]
+ o HTTP3: fix typo somehere1 > somewhere1 [108]
+ o HTTP3: show an --alt-svc using example too
+ o INSTALL: add missing space for configure commands [106]
+ o INSTALL: add vcpkg installation instructions [35]
+ o README: minor grammar fix [39]
+ o altsvc: accept quoted ma and persist values [60]
+ o altsvc: both backends run h3-23 now [31]
+ o appveyor: Add MSVC ARM64 build [87]
+ o appveyor: Use two parallel compilation on appveyor with CMake [98]
+ o appveyor: add --disable-proxy autotools build [94]
+ o appveyor: add 32-bit MinGW-w64 build [58]
+ o appveyor: add a winbuild [14]
+ o appveyor: add a winbuild that uses VS2017 [84]
+ o appveyor: make winbuilds with DEBUG=no/yes and VS 2015/2017 [95]
+ o appveyor: publish artifacts on appveyor [105]
+ o appveyor: upgrade VS2017 to VS2019 [29]
+ o asyn-thread: make use of Curl_socketpair() where available [85]
+ o asyn-thread: s/AF_LOCAL/AF_UNIX for Solaris [3]
+ o build: Remove unused HAVE_LIBSSL and HAVE_LIBCRYPTO defines [77]
+ o checksrc: fix uninitialized variable warning [57]
+ o chunked-encoding: stop hiding the CURLE_BAD_CONTENT_ENCODING error [56]
+ o cirrus: Increase the git clone depth
+ o cirrus: Switch the FreeBSD 11.x build to 11.3 and add a 13.0 build
+ o cirrus: switch off blackhole status on the freebsd CI machines [72]
+ o cleanups: 21 various PVS-Studio warnings [24]
+ o configure: only say ipv6 enabled when the variable is set [110]
+ o configure: remove all cyassl references [90]
+ o conn-reuse: requests wanting NTLM can reuse non-NTLM connections [99]
+ o connect: return CURLE_OPERATION_TIMEDOUT for errno == ETIMEDOUT [72]
+ o connect: silence sign-compare warning [83]
+ o cookie: avoid harmless use after free [69]
+ o cookie: pass in the correct cookie amount to qsort() [27]
+ o cookies: change argument type for Curl_flush_cookies [67]
+ o cookies: using a share with cookies shouldn't enable the cookie engine [63]
+ o copyrights: update copyright notices to 2019 [101]
+ o curl: create easy handles on-demand and not ahead of time [54]
+ o curl: ensure HTTP 429 triggers --retry [64]
+ o curl: exit the create_transfers loop on errors [33]
+ o curl: fix memory leaked by parse_metalink() [17]
+ o curl: load large files with -d @ much faster [19]
+ o docs/HTTP3: fix `--with-ssl` ngtcp2 configure flag [21]
+ o docs: added multi-event.c example [75]
+ o docs: disambiguate CURLUPART_HOST is for host name (ie no port) [62]
+ o docs: note on failed handles not being counted by curl_multi_perform [70]
+ o doh: allow only http and https in debug mode [48]
+ o doh: avoid truncating DNS QTYPE to lower octet [23]
+ o doh: clean up dangling DOH memory on easy close [9]
+ o doh: fix (harmless) buffer overrun [13]
+ o doh: fix undefined behaviour and open up for gcc and clang optimization [12]
+ o doh: return early if there is no time left [48]
+ o examples/sslbackend: fix -Wchar-subscripts warning [89]
+ o examples: remove the "this exact code has not been verified"
+ o git: add tests/server/disabled to .gitignore [59]
+ o gnutls: make gnutls_bye() not wait for response on shutdown [104]
+ o http2: expire a timeout at end of stream [88]
+ o http2: prevent dup'ed handles to send dummy PRIORITY frames [68]
+ o http2: relax verification of :authority in push promise requests [8]
+ o http2_recv: a closed stream trumps pause state [88]
+ o http: lowercase headernames for HTTP/2 and HTTP/3 [49]
+ o ldap: Stop using wide char version of ldapp_err2string [1]
+ o ldap: fix OOM error on missing query string [76]
+ o mbedtls: add error message for cert validity starting in the future [102]
+ o mime: when disabled, avoid C99 macro [7]
+ o ngtcp2: adapt to API change [66]
+ o ngtcp2: compile with latest ngtcp2 + nghttp3 draft-23 [25]
+ o ngtcp2: remove fprintf() calls [43]
+ o openssl: close_notify on the FTP data connection doesn't mean closure [20]
+ o openssl: fix compiler warning with LibreSSL [34]
+ o openssl: use strerror on SSL_ERROR_SYSCALL [41]
+ o os400: getpeername() and getsockname() return ebcdic AF_UNIX sockaddr [47]
+ o parsedate: fix date parsing disabled builds [18]
+ o quiche: don't close connection at end of stream
+ o quiche: persist connection details (fixes -I with --http3) [11]
+ o quiche: set 'drain' when returning without having drained the queues
+ o quiche: update HTTP/3 config creation to new API [61]
+ o redirect: handle redirects to absolute URLs containing spaces [52]
+ o runtests: get textaware info from curl instead of perl [86]
+ o schannel: reverse the order of certinfo insertions [96]
+ o schannel_verify: Fix concurrent openings of CA file [103]
+ o security: silence conversion warning [83]
+ o setopt: handle ALTSVC set to NULL
+ o setopt: make it easier to add new enum values [4]
+ o setopt: store CURLOPT_RTSP_SERVER_CSEQ correctly [24]
+ o smb: check for full size message before reading message details [10]
+ o smbserver: fix Python 3 compatibility [82]
+ o socks: Fix destination host shown on SOCKS5 error [32]
+ o test1162: disable MSYS2's POSIX path conversion
+ o test1591: fix spelling of http feature [97]
+ o tests: add `connect to non-listen` keywords [91]
+ o tests: fix narrowing conversion warnings [37]
+ o tests: fix the test 3001 cert failures [100]
+ o tests: makes tests succeed when using --disable-proxy [81]
+ o tests: use %FILE_PWD for file:// URLs [92]
+ o tests: use port 2 instead of 60000 for a safer non-listening port [72]
+ o tool_operate: Fix retry sleep time shown to user when Retry-After [79]
+ o travis: Add an ARM64 build
+ o url: Curl_free_request_state() should also free doh handles [107]
+ o url: don't set appconnect time for non-ssl/non-ssh connections [42]
+ o url: fix the NULL hostname compiler warning [44]
+ o url: normalize CURLINFO_EFFECTIVE_URL [80]
+ o url: only reuse TLS connections with matching pinning [5]
+ o urlapi: avoid index underflow for short ipv6 hostnames [26]
+ o urlapi: fix URL encoding when setting a full URL [53]
+ o urlapi: fix unused variable warning [57]
+ o urlapi: question mark within fragment is still fragment [45]
+ o urldata: use 'bool' for the bit type on MSVC compilers [30]
+ o vtls: Fix comment typo about macosx-version-min compiler flag [38]
+ o vtls: fix narrowing conversion warnings [50]
+ o winbuild/MakefileBuild.vc: Add vssh [2]
+ o winbuild/MakefileBuild.vc: Fix line endings
+ o winbuild: Add manifest to curl.exe for proper OS version detection [51]
+ o winbuild: add ENABLE_UNICODE option [65]
 
 This release includes the following known bugs:
 
@@ -106,89 +147,136 @@ This release includes the following known bugs:
 This release would not have looked like this without help, code, reports and
 advice from friends like these:
 
-  Alessandro Ghedini, Alex Mayorga, Amit Katyal, Balazs Kovacsics,
-  Brad Spencer, Brandon Dong, Carlo Marcelo Arenas Belón, Christopher Head,
-  Clément Notin, codesniffer13 on github, Daniel Gustafsson, Daniel Stenberg,
-  Dominik Hölzl, Eric Wong, Felix Hädicke, Gergely Nagy, Gisle Vanem,
-  Igor Makarov, Ironbars13 on github, Jason Lee, Jeremy Lainé,
-  Jonathan Cardoso Machado, Junho Choi, Kamil Dudka, Kyle Abramowitz,
-  Kyohei Kadota, Lance Ware, Marcel Raad, Max Dymond, Michael Lee,
-  Michal ÄŒaplygin, migueljcrum on github, Mike Crowe, niallor on github,
-  osabc on github, patnyb on github, Patrick Monnerat, Peter Wu, Ray Satiro,
-  Rolf Eike Beer, Steve Holme, Tatsuhiro Tsujikawa, The Infinnovation team,
-  Thomas Vegas, Tom van der Woerdt, Yiming Jing,
-  (46 contributors)
+  Alessandro Ghedini, Alex Konev, Alex Samorukov, Andrei Valeriu BICA,
+  Barry Pollard, Bastien Bouclet, Bernhard Walle, Bylon2 on github,
+  Christophe Dervieux, Christoph M. Becker, Dagobert Michelsen, Dan Fandrich,
+  Daniel Silverstone, Daniel Stenberg, Denis Chaplygin, Emil Engler,
+  Francois Rivard, George Liu, Gilles Vollant, Griffin Downs, Harry Sintonen,
+  Ilya Kosarev, infinnovation-dev on github, Jacob Barthelmeh, Javier Blazquez,
+  Jens Finkhaeuser, Jeremy Lainé, Jeroen Ooms, Jimmy Gaussen, Joel Depooter,
+  Jojojov on github, jzinn on github, Kamil Dudka, Kunal Ekawde, Lucas Pardue,
+  Lucas Severo, Marcel Hernandez, Marcel Raad, Martin Gartner, Max Dymond,
+  Michael Kaufmann, Michał Janiszewski, momala454 on github,
+  Nathaniel J. Smith, Niall O'Reilly, nico-abram on github,
+  Nikos Mavrogiannopoulos, Patrick Monnerat, Paul B. Omta, Paul Dreik,
+  Peter Sumatra, Philippe Marguinaud, Piotr Komborski, Ray Satiro,
+  Richard Alcock, Roland Hieber, Samuel Surtees, Sebastian Haglund,
+  Spezifant on github, Stian Soiland-Reyes, SumatraPeter on github,
+  Tatsuhiro Tsujikawa, Tom van der Woerdt, Trivikram Kamat,
+  Valerii Zapodovnikov, Vilhelm Prytz, Yechiel Kalmenson, Zenju on github,
+  (68 contributors)
 
         Thanks! (and sorry if I forgot to mention someone)
 
 References to bug reports and discussions on issues:
 
- [1] = https://curl.haxx.se/bug/?i=4129
- [2] = https://curl.haxx.se/bug/?i=4121
- [3] = https://curl.haxx.se/bug/?i=4131
- [4] = https://curl.haxx.se/bug/?i=3804
- [5] = https://curl.haxx.se/bug/?i=3500
- [6] = https://curl.haxx.se/bug/?i=4144
- [7] = https://curl.haxx.se/bug/?i=4147
- [8] = https://curl.haxx.se/bug/?i=4153
- [9] = https://curl.haxx.se/bug/?i=4143
- [10] = https://curl.haxx.se/bug/?i=4138
- [11] = https://curl.haxx.se/bug/?i=4043
- [12] = https://curl.haxx.se/bug/?i=4084
- [13] = https://curl.haxx.se/bug/?i=4166
- [14] = https://curl.haxx.se/bug/?i=4157
- [15] = https://curl.haxx.se/bug/?i=3780
- [16] = https://curl.haxx.se/bug/?i=4183
- [17] = https://curl.haxx.se/bug/?i=4182
- [18] = https://curl.haxx.se/bug/?i=3992
- [19] = https://curl.haxx.se/bug/?i=4176
- [20] = https://curl.haxx.se/bug/?i=4165
- [21] = https://curl.haxx.se/bug/?i=4174
- [22] = https://curl.haxx.se/bug/?i=3701
- [23] = https://curl.haxx.se/bug/?i=4167
- [24] = https://curl.haxx.se/bug/?i=4172
- [25] = https://curl.haxx.se/bug/?i=4169
- [26] = https://curl.haxx.se/bug/?i=4136
- [27] = https://curl.haxx.se/bug/?i=3653
- [28] = https://curl.haxx.se/bug/?i=4163
- [29] = https://curl.haxx.se/bug/?i=4191
- [30] = https://curl.haxx.se/bug/?i=4192
- [31] = https://curl.haxx.se/bug/?i=4201
- [32] = https://curl.haxx.se/bug/?i=4199
- [33] = https://curl.haxx.se/bug/?i=4197
- [34] = https://curl.haxx.se/bug/?i=4198
- [35] = https://curl.haxx.se/bug/?i=3794
- [36] = https://curl.haxx.se/bug/?i=3794
- [37] = https://curl.haxx.se/bug/?i=4213
- [38] = https://curl.haxx.se/bug/?i=4216
- [39] = https://curl.haxx.se/bug/?i=4187
- [40] = https://curl.haxx.se/bug/?i=4221
- [41] = https://curl.haxx.se/bug/?i=3848
- [42] = https://curl.haxx.se/bug/?i=4223
- [43] = https://curl.haxx.se/bug/?i=4227
- [44] = https://curl.haxx.se/bug/?i=4228
- [45] = https://curl.haxx.se/bug/?i=4188
- [46] = https://curl.haxx.se/bug/?i=4258
- [47] = https://curl.haxx.se/bug/?i=4254
- [48] = https://curl.haxx.se/bug/?i=4248
- [49] = https://curl.haxx.se/bug/?i=4245
- [50] = https://curl.haxx.se/bug/?i=4244
- [51] = https://curl.haxx.se/bug/?i=4241
- [52] = https://curl.haxx.se/bug/?i=4230
- [53] = https://curl.haxx.se/bug/?i=4235
- [54] = https://curl.haxx.se/bug/?i=4273
- [55] = https://curl.haxx.se/bug/?i=4250
- [56] = https://curl.haxx.se/bug/?i=4267
- [57] = https://curl.haxx.se/bug/?i=4290
- [58] = https://curl.haxx.se/bug/?i=4289
- [59] = https://curl.haxx.se/bug/?i=4287
- [60] = https://curl.haxx.se/bug/?i=4286
- [61] = https://curl.haxx.se/bug/?i=4283
- [62] = https://curl.haxx.se/bug/?i=4279
- [63] = https://curl.haxx.se/bug/?i=4302
- [64] = https://curl.haxx.se/docs/CVE-2019-5481.html
- [65] = https://curl.haxx.se/docs/CVE-2019-5482.html
- [66] = https://curl.haxx.se/bug/?i=4307
- [67] = https://curl.haxx.se/bug/?i=4299
- [68] = https://curl.haxx.se/bug/?i=4304
- [69] = https://curl.haxx.se/bug/?i=4315
+ [1] = https://curl.haxx.se/bug/?i=4272
+ [2] = https://curl.haxx.se/bug/?i=4322
+ [3] = https://curl.haxx.se/bug/?i=4328
+ [4] = https://curl.haxx.se/bug/?i=4321
+ [5] = https://curl.haxx.se/mail/lib-2019-09/0061.html
+ [6] = https://curl.haxx.se/bug/?i=4348
+ [7] = https://curl.haxx.se/bug/?i=4368
+ [8] = https://curl.haxx.se/bug/?i=4365
+ [9] = https://curl.haxx.se/bug/?i=4366
+ [10] = https://curl.haxx.se/bug/?i=4363
+ [11] = https://curl.haxx.se/bug/?i=4358
+ [12] = https://curl.haxx.se/bug/?i=4350
+ [13] = https://curl.haxx.se/bug/?i=4352
+ [14] = https://curl.haxx.se/bug/?i=4324
+ [15] = https://curl.haxx.se/bug/?i=4339
+ [16] = https://curl.haxx.se/bug/?i=4332
+ [17] = https://curl.haxx.se/bug/?i=4326
+ [18] = https://curl.haxx.se/bug/?i=4325
+ [19] = https://curl.haxx.se/bug/?i=4336
+ [20] = https://curl.haxx.se/bug/?i=4329
+ [21] = https://curl.haxx.se/bug/?i=4338
+ [22] = https://curl.haxx.se/bug/?i=4349
+ [23] = https://curl.haxx.se/bug/?i=4381
+ [24] = https://curl.haxx.se/bug/?i=4374
+ [25] = https://curl.haxx.se/bug/?i=4392
+ [26] = https://curl.haxx.se/bug/?i=4389
+ [27] = https://curl.haxx.se/bug/?i=4386
+ [28] = https://curl.haxx.se/bug/?i=4382
+ [29] = https://curl.haxx.se/bug/?i=4383
+ [30] = https://curl.haxx.se/bug/?i=4387
+ [31] = https://curl.haxx.se/bug/?i=4395
+ [32] = https://curl.haxx.se/bug/?i=4394
+ [33] = https://curl.haxx.se/bug/?i=4393
+ [34] = https://curl.haxx.se/bug/?i=4397
+ [35] = https://curl.haxx.se/bug/?i=4435
+ [36] = https://curl.haxx.se/bug/?i=4428
+ [37] = https://curl.haxx.se/bug/?i=4415
+ [38] = https://curl.haxx.se/bug/?i=4425
+ [39] = https://curl.haxx.se/bug/?i=4431
+ [40] = https://curl.haxx.se/bug/?i=4430
+ [41] = https://curl.haxx.se/bug/?i=4411
+ [42] = https://curl.haxx.se/bug/?i=3760
+ [43] = https://curl.haxx.se/bug/?i=4421
+ [44] = https://curl.haxx.se/bug/?i=4403
+ [45] = https://curl.haxx.se/bug/?i=4412
+ [46] = https://curl.haxx.se/bug/?i=4407
+ [47] = https://curl.haxx.se/bug/?i=4214
+ [48] = https://curl.haxx.se/bug/?i=4406
+ [49] = https://curl.haxx.se/bug/?i=4400
+ [50] = https://curl.haxx.se/bug/?i=4398
+ [51] = https://curl.haxx.se/bug/?i=4399
+ [52] = https://curl.haxx.se/bug/?i=4445
+ [53] = https://curl.haxx.se/bug/?i=4447
+ [54] = https://curl.haxx.se/bug/?i=4393
+ [55] = https://curl.haxx.se/bug/?i=4410
+ [56] = https://curl.haxx.se/bug/?i=4310
+ [57] = https://curl.haxx.se/bug/?i=4444
+ [58] = https://curl.haxx.se/bug/?i=4433
+ [59] = https://curl.haxx.se/bug/?i=4441
+ [60] = https://curl.haxx.se/bug/?i=4443
+ [61] = https://curl.haxx.se/bug/?i=4437
+ [62] = https://curl.haxx.se/bug/?i=4424
+ [63] = https://curl.haxx.se/bug/?i=4429
+ [64] = https://curl.haxx.se/bug/?i=4465
+ [65] = https://curl.haxx.se/bug/?i=4308
+ [66] = https://curl.haxx.se/bug/?i=4457
+ [67] = https://curl.haxx.se/bug/?i=4455
+ [68] = https://curl.haxx.se/bug/?i=4303
+ [69] = https://curl.haxx.se/bug/?i=4454
+ [70] = https://curl.haxx.se/bug/?i=4446
+ [71] = https://curl.haxx.se/bug/?i=4011
+ [72] = https://curl.haxx.se/bug/?i=4461
+ [73] = https://curl.haxx.se/bug/?i=4422
+ [74] = https://curl.haxx.se/bug/?i=4469
+ [75] = https://curl.haxx.se/bug/?i=4471
+ [76] = https://curl.haxx.se/bug/?i=4467
+ [77] = https://curl.haxx.se/bug/?i=4460
+ [78] = https://curl.haxx.se/bug/?i=4486
+ [79] = https://curl.haxx.se/bug/?i=4498
+ [80] = https://curl.haxx.se/bug/?i=4491
+ [81] = https://curl.haxx.se/bug/?i=4488
+ [82] = https://curl.haxx.se/bug/?i=4484
+ [83] = https://curl.haxx.se/bug/?i=4483
+ [84] = https://curl.haxx.se/bug/?i=4482
+ [85] = https://curl.haxx.se/bug/?i=4466
+ [86] = https://curl.haxx.se/bug/?i=4506
+ [87] = https://curl.haxx.se/bug/?i=4507
+ [88] = https://curl.haxx.se/bug/?i=4496
+ [89] = https://curl.haxx.se/bug/?i=4503
+ [90] = https://curl.haxx.se/bug/?i=4502
+ [91] = https://curl.haxx.se/bug/?i=4511
+ [92] = https://curl.haxx.se/bug/?i=4512
+ [93] = https://curl.haxx.se/bug/?i=4531
+ [94] = https://curl.haxx.se/bug/?i=4526
+ [95] = https://curl.haxx.se/bug/?i=4523
+ [96] = https://curl.haxx.se/bug/?i=4518
+ [97] = https://curl.haxx.se/bug/?i=4520
+ [98] = https://curl.haxx.se/bug/?i=4508
+ [99] = https://curl.haxx.se/bug/?i=4499
+ [100] = https://curl.haxx.se/bug/?i=4551
+ [101] = https://curl.haxx.se/bug/?i=4547
+ [102] = https://curl.haxx.se/bug/?i=4552
+ [103] = https://curl.haxx.se/mail/lib-2019-10/0104.html
+ [104] = https://curl.haxx.se/bug/?i=4487
+ [105] = https://curl.haxx.se/bug/?i=4509
+ [106] = https://curl.haxx.se/bug/?i=4539
+ [107] = https://curl.haxx.se/bug/?i=4463
+ [108] = https://curl.haxx.se/bug/?i=4535
+ [109] = https://curl.haxx.se/bug/?i=4529
+ [110] = https://curl.haxx.se/bug/?i=4555
diff --git a/appveyor.yml b/appveyor.yml
index d54b500fe..9b4ad5cf5 100644
--- a/appveyor.yml
+++ b/appveyor.yml
@@ -13,20 +13,22 @@ environment:
         SHARED: ON
         DISABLED_TESTS: ""
         COMPILER_PATH: ""
-      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2019"
         BUILD_SYSTEM: CMake
-        PRJ_GEN: "Visual Studio 15 2017 Win64"
+        PRJ_GEN: "Visual Studio 16 2019"
+        TARGET: "-A x64"
         PRJ_CFG: Debug
         OPENSSL: OFF
         WINSSL: ON
         HTTP_ONLY: OFF
         TESTING: ON
         SHARED: OFF
-        DISABLED_TESTS: ""
+        DISABLED_TESTS: "!1139"
         COMPILER_PATH: ""
-      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2019"
         BUILD_SYSTEM: CMake
-        PRJ_GEN: "Visual Studio 15 2017 Win64"
+        PRJ_GEN: "Visual Studio 16 2019"
+        TARGET: "-A x64"
         PRJ_CFG: Release
         OPENSSL: ON
         WINSSL: OFF
@@ -44,29 +46,31 @@ environment:
         HTTP_ONLY: OFF
         TESTING: ON
         SHARED: OFF
-        DISABLED_TESTS: ""
+        DISABLED_TESTS: "!1139"
         COMPILER_PATH: ""
-      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2019"
         BUILD_SYSTEM: CMake
-        PRJ_GEN: "Visual Studio 15 2017 Win64"
+        PRJ_GEN: "Visual Studio 16 2019"
+        TARGET: "-A x64"
         PRJ_CFG: Debug
         OPENSSL: OFF
         WINSSL: OFF
         HTTP_ONLY: OFF
         TESTING: ON
         SHARED: OFF
-        DISABLED_TESTS: ""
+        DISABLED_TESTS: "!1139"
         COMPILER_PATH: ""
-      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2019"
         BUILD_SYSTEM: CMake
-        PRJ_GEN: "Visual Studio 15 2017 Win64"
+        PRJ_GEN: "Visual Studio 16 2019"
+        TARGET: "-A x64"
         PRJ_CFG: Debug
         OPENSSL: OFF
         WINSSL: OFF
         HTTP_ONLY: ON
         TESTING: ON
         SHARED: OFF
-        DISABLED_TESTS: ""
+        DISABLED_TESTS: "!1139"
         COMPILER_PATH: ""
       - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2015"
         BUILD_SYSTEM: CMake
@@ -77,10 +81,23 @@ environment:
         HTTP_ONLY: OFF
         TESTING: ON
         SHARED: OFF
-        DISABLED_TESTS: "!198"
+        DISABLED_TESTS: "!198 !1139"
         COMPILER_PATH: 
"C:\\mingw-w64\\x86_64-8.1.0-posix-seh-rt_v6-rev0\\mingw64\\bin"
         MSYS2_ARG_CONV_EXCL: "/*"
         BUILD_OPT: -k
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2015"
+        BUILD_SYSTEM: CMake
+        PRJ_GEN: "MSYS Makefiles"
+        PRJ_CFG: Debug
+        OPENSSL: OFF
+        WINSSL: ON
+        HTTP_ONLY: OFF
+        TESTING: ON
+        SHARED: OFF
+        DISABLED_TESTS: "!1139"
+        COMPILER_PATH: 
"C:\\mingw-w64\\i686-6.3.0-posix-dwarf-rt_v5-rev1\\mingw32\\bin"
+        MSYS2_ARG_CONV_EXCL: "/*"
+        BUILD_OPT: -k
       - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2015"
         BUILD_SYSTEM: CMake
         PRJ_GEN: "MSYS Makefiles"
@@ -90,15 +107,52 @@ environment:
         HTTP_ONLY: OFF
         TESTING: ON
         SHARED: OFF
-        DISABLED_TESTS: ""
+        DISABLED_TESTS: "!1139"
         COMPILER_PATH: "C:\\MinGW\\bin"
         MSYS2_ARG_CONV_EXCL: "/*"
         BUILD_OPT: -k
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2015"
+        BUILD_SYSTEM: winbuild_vs2015
+        DEBUG: yes
+        PATHPART: debug
+        TESTING: OFF
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2015"
+        BUILD_SYSTEM: winbuild_vs2015
+        DEBUG: no
+        PATHPART: release
+        TESTING: OFF
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
+        BUILD_SYSTEM: winbuild_vs2017
+        DEBUG: yes
+        PATHPART: debug
+        TESTING: OFF
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
+        BUILD_SYSTEM: winbuild_vs2017
+        DEBUG: no
+        PATHPART: release
+        TESTING: OFF
       - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2017"
         BUILD_SYSTEM: VisualStudioSolution
         PRJ_CFG: "DLL Debug - DLL Windows SSPI - DLL WinIDN"
         TESTING: OFF
         VC_VERSION: VC15
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2019"
+        BUILD_SYSTEM: CMake
+        PRJ_GEN: "Visual Studio 16 2019"
+        TARGET: "-A ARM64"
+        PRJ_CFG: Release
+        OPENSSL: OFF
+        WINSSL: ON
+        HTTP_ONLY: OFF
+        TESTING: OFF
+        SHARED: OFF
+        DISABLED_TESTS: ""
+        COMPILER_PATH: ""
+      - APPVEYOR_BUILD_WORKER_IMAGE: "Visual Studio 2015"
+        BUILD_SYSTEM: autotools
+        TESTING: ON
+        DISABLED_TESTS: "!19 !1056 !1233 !1242 !1243 !2002 !2003"
+        CONFIG_ARGS: "--enable-debug --enable-werror --enable-alt-svc 
--disable-threaded-resolver --disable-proxy"
 
 install:
     - set "PATH=C:\msys64\usr\bin;%PATH%"
@@ -109,6 +163,7 @@ build_script:
     - if %BUILD_SYSTEM%==CMake (
         cmake .
         -G"%PRJ_GEN%"
+        %TARGET%
         -DCMAKE_USE_OPENSSL=%OPENSSL%
         -DCMAKE_USE_WINSSL=%WINSSL%
         -DHTTP_ONLY=%HTTP_ONLY%
@@ -120,18 +175,44 @@ build_script:
         -DCMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG=""
         -DCMAKE_INSTALL_PREFIX="C:/CURL"
         -DCMAKE_BUILD_TYPE=%PRJ_CFG% &&
-        cmake --build . --config %PRJ_CFG% --clean-first -- %BUILD_OPT%) else (
+        cmake --build . --config %PRJ_CFG% --parallel 2 --clean-first -- 
%BUILD_OPT%
+      ) else (
       if %BUILD_SYSTEM%==VisualStudioSolution (
         cd projects &&
         .\\generate.bat %VC_VERSION% &&
-        msbuild.exe /p:Configuration="%PRJ_CFG%" 
"Windows\\%VC_VERSION%\\curl-all.sln" ))
+        msbuild.exe /p:Configuration="%PRJ_CFG%" 
"Windows\\%VC_VERSION%\\curl-all.sln"
+      ) else (
+      if %BUILD_SYSTEM%==winbuild_vs2015 (
+        call buildconf.bat &&
+        cd winbuild &&
+        call "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" 
/x64 &&
+        call "C:\Program Files (x86)\Microsoft Visual Studio 
14.0\VC\vcvarsall.bat" x86_amd64 &&
+        nmake /f Makefile.vc mode=dll VC=14 "SSL_PATH=C:\OpenSSL-v111-Win64" 
WITH_SSL=dll MACHINE=x64 DEBUG=%DEBUG% &&
+        
..\builds\libcurl-vc14-x64-%PATHPART%-dll-ssl-dll-ipv6-sspi\bin\curl.exe -V
+      ) else (
+      if %BUILD_SYSTEM%==winbuild_vs2017 (
+        call buildconf.bat &&
+        cd winbuild &&
+        call "C:\Program Files (x86)\Microsoft Visual 
Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat" &&
+        nmake /f Makefile.vc mode=dll VC=15 "SSL_PATH=C:\OpenSSL-v111-Win64" 
WITH_SSL=dll MACHINE=x64 DEBUG=%DEBUG% &&
+        
..\builds\libcurl-vc15-x64-%PATHPART%-dll-ssl-dll-ipv6-sspi\bin\curl.exe -V
+      ) else (
+      if %BUILD_SYSTEM%==autotools (
+        bash.exe -e -l -c "cd /c/projects/curl && ./buildconf && ./configure 
%CONFIG_ARGS% && make && make examples && cd tests && make"
+      )))))
 
 test_script:
     - if %TESTING%==ON (
-        bash.exe -e -l -c "cd /c/projects/curl/tests && ./runtests.pl -a -p 
!flaky !1139 %DISABLED_TESTS%" )
+        bash.exe -e -l -c "cd /c/projects/curl/tests && ./runtests.pl -a -p 
!flaky %DISABLED_TESTS%" )
 
 # whitelist branches to avoid testing feature branches twice (as branch and as 
pull request)
 branches:
     only:
         - master
         - /\/ci$/
+
+artifacts:
+  - path: '**/curl.exe'
+    name: curl
+  - path: '**/libcurl.dll'
+    name: libcurl
diff --git a/aux-gnurl/.gitignore b/aux-gnurl/.gitignore
new file mode 100644
index 000000000..24600083d
--- /dev/null
+++ b/aux-gnurl/.gitignore
@@ -0,0 +1 @@
+!Makefile
diff --git a/aux-gnurl/Makefile b/aux-gnurl/Makefile
new file mode 100644
index 000000000..daee65456
--- /dev/null
+++ b/aux-gnurl/Makefile
@@ -0,0 +1,22 @@
+.PHONY: all
+all: makefiles allfiles
+
+.PHONY: makefiles
+makefiles:
+       ./gnurl0.awk ../docs/libcurl/opts/Makefile.inc > 
../docs/libcurl/opts/Makefile.inc.tmp; mv ../docs/libcurl/opts/Makefile.inc.tmp 
../docs/libcurl/opts/Makefile.inc
+
+# manfiles:
+
+.PHONY: allfiles
+allfiles:
+       sh ./gnurl1.sh
+
+.PHONY: lint
+lint:
+       sh ./man_lint.sh
+
+.PHONY: clean
+clean:
+       git restore ..
+
+.include <bsd.prog.mk>
diff --git a/aux-gnurl/gnurl0.awk b/aux-gnurl/gnurl0.awk
new file mode 100755
index 000000000..955bc9763
--- /dev/null
+++ b/aux-gnurl/gnurl0.awk
@@ -0,0 +1,8 @@
+#!/usr/bin/awk -f
+
+{
+    gsub("CURLOPT_","GNURLOPT_");
+    gsub("CURLMOPT_","GNURLMOPT_");
+    gsub("CURLINFO_","GNURLINFO_");
+    print $0
+}
diff --git a/aux-gnurl/gnurl1.awk b/aux-gnurl/gnurl1.awk
new file mode 100755
index 000000000..a6133f835
--- /dev/null
+++ b/aux-gnurl/gnurl1.awk
@@ -0,0 +1,10 @@
+#!/usr/bin/awk -f
+
+{
+    gsub("curl/curl.h","gnurl/curl.h");
+    gsub("TH curl_", "TH gnurl_");
+    gsub("TH CURL", "TH GNURL");
+    gsub("libcurl Manual", "libgnurl Manual");
+    gsub("man3/curl", "man3/gnurl");
+    print $0
+}
diff --git a/aux-gnurl/gnurl1.sh b/aux-gnurl/gnurl1.sh
new file mode 100644
index 000000000..17865009c
--- /dev/null
+++ b/aux-gnurl/gnurl1.sh
@@ -0,0 +1,36 @@
+_ALLFILES=$(find .. \
+                 ! -path '*.git/*' \
+                 ! -path '*aux-gnurl/**' \
+                 ! -path '*.der' \
+                 ! -path '*.in' \
+                 ! -path '*.Po' \
+                 ! -path '*.tar*' \
+                 ! -path '*m4/*' \
+                 ! -path '*.pax*' \
+                 ! -path '*.sum*' \
+                 ! -path '*.asc' \
+                 ! -path '*.sig' \
+                 ! -path '*build/*' \
+                 ! -path '*autom4te.cache/*' \
+                 ! -path '*.Plo' \
+                 ! -path '*.github/*' \
+                 ! -path '*.o' \
+                 ! -path '*.lo' \
+                 ! -path '*projects/Windows/*' \
+                 ! -path '*.d' \
+                 ! -path '*.key' \
+                 ! -path '*.pem' \
+                 ! -path '*.crl' \
+                 ! -path '*.csr' \
+                 ! -path '*.pub' \
+                 ! -path '*.prm' \
+                 ! -path '*.md*' \
+                 ! -path '*MacOSX*' \
+                 ! -path '*docs/TODO*' \
+                 ! -path '*src/macos*' \
+                 ! -path '*.crt' \
+                 -type f -exec grep -Il '.' {} \; | xargs -L 1 echo)
+for f in ${_ALLFILES}; do
+        oldmode=$(stat -f '%OLp' ${f})
+       ./gnurl1.awk ${f} > ${f}.tmp ; mv ${f}.tmp ${f} ; chmod $oldmode ${f}
+done
diff --git a/aux-gnurl/man_lint.sh b/aux-gnurl/man_lint.sh
index 7adf1501e..b43d2e0ee 100755
--- a/aux-gnurl/man_lint.sh
+++ b/aux-gnurl/man_lint.sh
@@ -1,6 +1,6 @@
 #!/bin/sh
 # spit out ONLY error messages using groff.
-for f in `find 'docs/' -name \*\.[1-9]`;
+for f in `find '../docs/' -name \*\.[1-9]`;
 do
     LC_ALL=en_US.UTF-8 \
     MANROFFSEQ='' \
@@ -8,4 +8,4 @@ do
     groff -m mandoc -b -z -w w $f;
 done
 # spit out ONLY error messages with mandoc:
-mandoc -T lint `find 'docs/' -name \*\.[1-9]`
+mandoc -T lint `find '../docs/' -name \*\.[1-9]`
diff --git a/aux-gnurl/sed.sh b/aux-gnurl/sed.sh
deleted file mode 100755
index 605a5eebc..000000000
--- a/aux-gnurl/sed.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/sh
-
-S=$HOME/src/gnunet/gnurl
-
-# /
-# find . -not -iwholename '*.git*' -not -iwholename '*sed.sh*' -type f -print0 
| xargs -0 sed -i 's/<curl\/curl.h>/<gnurl\/curl.h>/g'
-find . ! -path '*.git/*' ! -path '*sed.sh*' -type f -print0 | xargs -0 sed -i 
's/<curl\/curl.h>/<gnurl\/curl.h>/g'
-echo "'curl/curl.h' -> 'gnurl/curl.h' ... [DONE]"
-
-# docs/
-cd $S/docs
-# find . -not -iwholename '*.git*' -not -iwholename '*sed.sh*' -type f -print0 
| xargs -0 sed -i 's/TH curl_/TH gnurl_/g'
-find . ! -path '*.git/*' ! -path '*sed.sh*' -type f -print0 | xargs -0 sed -i 
's/TH curl_/TH gnurl_/g'
-echo "'TH curl_' -> 'TH gnurl_' ... [DONE]"
-
-find . ! -path '*.git/*' ! -path '*sed.sh*' -type f -print0 | xargs -0 sed -i 
's/TH CURL/TH GNURL/g'
-echo "'TH CURL' -> 'TH GNURL' ... [DONE]"
-
-
-# find . -not -iwholename '*.git*' -not -iwholename '*sed.sh*' -type f -print0 
| xargs -0 sed -i 's/libcurl Manual/libgnurl Manual/g'
-find . ! -path '*.git/*' ! -path '*sed.sh*' -type f -print0 | xargs -0 sed -i 
's/libcurl Manual/libgnurl Manual/g'
-echo "'libcurl Manual' -> 'libgnurl Manual' ... [DONE]"
-
-# find . -not -iwholename '*.git*' -not -iwholename '*sed.sh*' -type f -print0 
| xargs -0 sed -i 's/man3\/curl/man3\/gnurl/g'
-find . ! -path '*.git/*' ! -path '*sed.sh*' -type f -print0 | xargs -0 sed -i 
's/man3\/curl/man3\/gnurl/g'
-echo "'man3/curl' -> 'man3/gnurl' ... [DONE]"
-
-# TODO: groff -> mdoc
-# find . ! -path '*.git/*' ! -path '*sed.sh*' -type f -print0 | xargs -0 sed 
-i 's/.SH/.Sh/g'
-
-cd $S
diff --git a/buildconf.bat b/buildconf.bat
index 8511a1fcb..043523315 100644
--- a/buildconf.bat
+++ b/buildconf.bat
@@ -6,7 +6,7 @@ rem *                             / __| | | | |_) | |
 rem *                            | (__| |_| |  _ <| |___
 rem *                             \___|\___/|_| \_\_____|
 rem *
-rem * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+rem * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 rem *
 rem * This software is licensed as described in the file COPYING, which
 rem * you should have received as part of this distribution. The terms
diff --git a/configure.ac b/configure.ac
index a7585da9f..8b91b954a 100755
--- a/configure.ac
+++ b/configure.ac
@@ -49,6 +49,7 @@ CURL_CHECK_OPTION_CURLDEBUG
 CURL_CHECK_OPTION_SYMBOL_HIDING
 CURL_CHECK_OPTION_ARES
 CURL_CHECK_OPTION_RT
+CURL_CHECK_OPTION_ESNI
 
 XC_CHECK_PATH_SEPARATOR
 
@@ -1307,10 +1308,6 @@ main()
   ipv6=yes
 ))
 
-if test "$ipv6" = "yes"; then
-  curl_ipv6_msg="enabled"
-fi
-
 # Check if struct sockaddr_in6 have sin6_scope_id member
 if test "$ipv6" = yes; then
   AC_MSG_CHECKING([if struct sockaddr_in6 has sin6_scope_id member])
@@ -2302,12 +2299,7 @@ OPT_WOLFSSL=no
 
 _cppflags=$CPPFLAGS
 _ldflags=$LDFLAGS
-AC_ARG_WITH(cyassl,dnl
-AC_HELP_STRING([--with-cyassl=PATH],[where to look for CyaSSL, PATH points to 
the installation root (default: system lib default)])
-AC_HELP_STRING([--without-cyassl], [disable CyaSSL detection]),
-  OPT_WOLFSSL=$withval)
 
-dnl provide --with-wolfssl as an alias for --with-cyassl
 AC_ARG_WITH(wolfssl,dnl
 AC_HELP_STRING([--with-wolfssl=PATH],[where to look for WolfSSL, PATH points 
to the installation root (default: system lib default)])
 AC_HELP_STRING([--without-wolfssl], [disable WolfSSL detection]),
@@ -2322,86 +2314,33 @@ if test -z "$ssl_backends" -o "x$OPT_WOLFSSL" != xno; 
then
       OPT_WOLFSSL=""
     fi
 
-    dnl This should be reworked to use pkg-config instead
-
-    cyassllibname=cyassl
-
-    if test -z "$OPT_WOLFSSL" ; then
-      dnl check for lib in system default first
-
-      AC_CHECK_LIB(cyassl, CyaSSL_Init,
-      dnl libcyassl found, set the variable
-       [
-         AC_DEFINE(USE_WOLFSSL, 1, [if wolfSSL is enabled])
-         AC_SUBST(USE_WOLFSSL, [1])
-         WOLFSSL_ENABLED=1
-         USE_WOLFSSL="yes"
-         ssl_msg="CyaSSL"
-        test cyassl != "$DEFAULT_SSL_BACKEND" || VALID_DEFAULT_SSL_BACKEND=yes
-        ])
-    fi
-
     addld=""
     addlib=""
     addcflags=""
-    cyassllib=""
 
     if test "x$USE_WOLFSSL" != "xyes"; then
-      dnl add the path and test again
       addld=-L$OPT_WOLFSSL/lib$libsuff
       addcflags=-I$OPT_WOLFSSL/include
-      cyassllib=$OPT_WOLFSSL/lib$libsuff
+      wolfssllibpath=$OPT_WOLFSSL/lib$libsuff
 
       LDFLAGS="$LDFLAGS $addld"
       if test "$addcflags" != "-I/usr/include"; then
          CPPFLAGS="$CPPFLAGS $addcflags"
       fi
 
-      AC_CHECK_LIB(cyassl, CyaSSL_Init,
-       [
-       AC_DEFINE(USE_WOLFSSL, 1, [if CyaSSL is enabled])
-       AC_SUBST(USE_WOLFSSL, [1])
-       WOLFSSL_ENABLED=1
-       USE_WOLFSSL="yes"
-       ssl_msg="CyaSSL"
-       test cyassl != "$DEFAULT_SSL_BACKEND" || VALID_DEFAULT_SSL_BACKEND=yes
-       ],
-       [
-         CPPFLAGS=$_cppflags
-         LDFLAGS=$_ldflags
-         cyassllib=""
-       ])
-    fi
-
-    addld=""
-    addlib=""
-    addcflags=""
-
-    if test "x$USE_WOLFSSL" != "xyes"; then
-      dnl libcyassl renamed to libwolfssl as of 3.4.0
-      addld=-L$OPT_WOLFSSL/lib$libsuff
-      addcflags=-I$OPT_WOLFSSL/include
-      cyassllib=$OPT_WOLFSSL/lib$libsuff
-
-      LDFLAGS="$LDFLAGS $addld"
-      if test "$addcflags" != "-I/usr/include"; then
-         CPPFLAGS="$CPPFLAGS $addcflags"
-      fi
-
-      cyassllibname=wolfssl
       my_ac_save_LIBS="$LIBS"
-      LIBS="-l$cyassllibname -lm $LIBS"
+      LIBS="-lwolfssl -lm $LIBS"
 
-      AC_MSG_CHECKING([for CyaSSL_Init in -lwolfssl])
+      AC_MSG_CHECKING([for wolfSSL_Init in -lwolfssl])
       AC_LINK_IFELSE([
        AC_LANG_PROGRAM([[
 /* These aren't needed for detection and confuse WolfSSL.
    They are set up properly later if it is detected.  */
 #undef SIZEOF_LONG
 #undef SIZEOF_LONG_LONG
-#include <cyassl/ssl.h>
+#include <wolfssl/ssl.h>
        ]],[[
-         return CyaSSL_Init();
+         return wolfSSL_Init();
        ]])
       ],[
          AC_MSG_RESULT(yes)
@@ -2410,25 +2349,25 @@ if test -z "$ssl_backends" -o "x$OPT_WOLFSSL" != xno; 
then
          WOLFSSL_ENABLED=1
          USE_WOLFSSL="yes"
          ssl_msg="WolfSSL"
-        test cyassl != "$DEFAULT_SSL_BACKEND" || VALID_DEFAULT_SSL_BACKEND=yes
+        test wolfssl != "$DEFAULT_SSL_BACKEND" || VALID_DEFAULT_SSL_BACKEND=yes
        ],
        [
          AC_MSG_RESULT(no)
          CPPFLAGS=$_cppflags
          LDFLAGS=$_ldflags
-         cyassllib=""
+         wolfssllibpath=""
        ])
       LIBS="$my_ac_save_LIBS"
     fi
 
     if test "x$USE_WOLFSSL" = "xyes"; then
-      AC_MSG_NOTICE([detected $cyassllibname])
+      AC_MSG_NOTICE([detected wolfSSL])
       check_for_ca_bundle=1
 
-      dnl cyassl/ctaocrypt/types.h needs SIZEOF_LONG_LONG defined!
+      dnl wolfssl/ctaocrypt/types.h needs SIZEOF_LONG_LONG defined!
       AX_COMPILE_CHECK_SIZEOF(long long)
 
-      LIBS="-l$cyassllibname -lm $LIBS"
+      LIBS="-lwolfssl -lm $LIBS"
 
       dnl Recent WolfSSL versions build without SSLv3 by default
       dnl WolfSSL needs configure --enable-opensslextra to have *get_peer*
@@ -2436,15 +2375,15 @@ if test -z "$ssl_backends" -o "x$OPT_WOLFSSL" != xno; 
then
                      wolfSSL_get_peer_certificate \
                      wolfSSL_UseALPN)
 
-      if test -n "$cyassllib"; then
+      if test -n "$wolfssllibpath"; then
         dnl when shared libs were found in a path that the run-time
         dnl linker doesn't search through, we need to add it to
         dnl CURL_LIBRARY_PATH to prevent further configure tests to fail
         dnl due to this
         if test "x$cross_compiling" != "xyes"; then
-          CURL_LIBRARY_PATH="$CURL_LIBRARY_PATH:$cyassllib"
+          CURL_LIBRARY_PATH="$CURL_LIBRARY_PATH:$wolfssllibpath"
           export CURL_LIBRARY_PATH
-          AC_MSG_NOTICE([Added $cyassllib to CURL_LIBRARY_PATH])
+          AC_MSG_NOTICE([Added $wolfssllibpath to CURL_LIBRARY_PATH])
         fi
       fi
 
@@ -4123,6 +4062,7 @@ if test "$ipv6" = "yes"; then
     AC_DEFINE(ENABLE_IPV6, 1, [Define if you want to enable IPv6 support])
     IPV6_ENABLED=1
     AC_SUBST(IPV6_ENABLED)
+    curl_ipv6_msg="enabled"
   fi
 fi
 
@@ -4601,6 +4541,36 @@ if test "$enable_altsvc" = "yes"; then
   experimental="$experimental alt-svc"
 fi
 
+dnl *************************************************************
+dnl check whether ESNI support, if desired, is actually available
+dnl
+if test "x$want_esni" != "xno"; then
+  AC_MSG_CHECKING([whether ESNI support is available])
+
+  dnl assume NOT and look for sufficient condition
+  ESNI_ENABLED=0
+  ESNI_SUPPORT=''
+
+  dnl OpenSSL with a chosen ESNI function should be enough
+  dnl so more exhaustive checking seems unnecessary for now
+  if test "x$OPENSSL_ENABLED" == "x1"; then
+    AC_CHECK_FUNCS(SSL_get_esni_status,
+      ESNI_SUPPORT="ESNI support available (OpenSSL with SSL_get_esni_status)"
+      ESNI_ENABLED=1)
+
+  dnl add 'elif' chain here for additional implementations
+  fi
+
+  dnl now deal with whatever we found
+  if test "x$ESNI_ENABLED" == "x1"; then
+    AC_DEFINE(USE_ESNI, 1, [if ESNI support is available])
+    AC_MSG_RESULT($ESNI_SUPPORT)
+    experimental="$experimental ESNI"
+  else
+    AC_MSG_ERROR([--enable-esni ignored: No ESNI support found])
+  fi
+fi
+
 dnl ************************************************************
 dnl hiding of library internal symbols
 dnl
@@ -4722,6 +4692,10 @@ if test "x$OPENSSL_ENABLED" = "x1" -o "x$GNUTLS_ENABLED" 
= "x1" \
   SUPPORT_FEATURES="$SUPPORT_FEATURES HTTPS-proxy"
 fi
 
+if test "x$ESNI_ENABLED" = "x1"; then
+  SUPPORT_FEATURES="$SUPPORT_FEATURES ESNI"
+fi
+
 AC_SUBST(SUPPORT_FEATURES)
 
 dnl For supported protocols in pkg-config file
@@ -4905,6 +4879,7 @@ AC_MSG_NOTICE([Configured to build gnurl/libgnurl:
   Alt-svc:          ${curl_altsvc_msg}
   HTTP2:            ${curl_h2_msg}
   HTTP3:            ${curl_h3_msg}
+  ESNI:             ${curl_esni_msg}
   Protocols:        ${SUPPORT_PROTOCOLS}
   Features:         ${SUPPORT_FEATURES}
   valgrind tests:   ${valgrind_msg}
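
The hunk above only wires up detection: with `--enable-esni`, configure looks
for `SSL_get_esni_status` in the chosen OpenSSL and, if found, defines
`USE_ESNI` and adds ESNI to the reported features. A minimal sketch of how a
source file can key off that define (assuming curl's usual configure-generated
`curl_config.h`; the function name is illustrative, not from this commit):

    #ifdef HAVE_CONFIG_H
    #include "curl_config.h"  /* may define USE_ESNI after ./configure --enable-esni */
    #endif

    #ifdef USE_ESNI
    int tls_has_esni(void) { return 1; }  /* ESNI-capable TLS backend found */
    #else
    int tls_has_esni(void) { return 0; }
    #endif
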
diff --git a/docs/BINDINGS.md b/docs/BINDINGS.md
index b3624b1cb..d0e80b8ac 100644
--- a/docs/BINDINGS.md
+++ b/docs/BINDINGS.md
@@ -23,6 +23,8 @@ Requests](https://github.com/whoshuu/cpr) by Huu Nguyen
 Cocoa: [BBHTTP](https://github.com/brunodecarvalho/BBHTTP) written by Bruno de 
Carvalho
 [curlhandle](https://github.com/karelia/curlhandle) Written by Dan Wood
 
+Clojure: [clj-curl](https://github.com/lsevero/clj-curl) by Lucas Severo
+
 [D](https://dlang.org/library/std/net/curl.html) Written by Kenneth Bogert
 
 [Delphi](https://github.com/Mercury13/curl4delphi) Written by Mikhail Merkuryev
@@ -53,6 +55,8 @@ Go: [go-curl](https://github.com/andelf/go-curl) by ShuYu Wang
 
 [Julia](https://github.com/forio/Curl.jl) Written by Paul Howe
 
+[Katipo](https://github.com/puzza007/katipo) is an Erlang HTTP library around libcurl.
+
 [Lisp](https://common-lisp.net/project/cl-curl/) Written by Liam Healy
 
 Lua: [luacurl](http://luacurl.luaforge.net/) by Alexander Marinov, 
[Lua-cURL](https://github.com/Lua-cURL) by Jürgen Hötzel
@@ -61,6 +65,8 @@ Lua: [luacurl](http://luacurl.luaforge.net/) by Alexander 
Marinov, [Lua-cURL](ht
 
 [.NET](https://sourceforge.net/projects/libcurl-net/) libcurl-net by Jeffrey 
Phillips
 
+[Nim](https://nimble.directory/pkg/libcurl) wrapper for libcurl
+
 [node.js](https://github.com/JCMais/node-libcurl) node-libcurl by Jonathan 
Cardoso Machado
 
 
[Object-Pascal](https://web.archive.org/web/20020610214926/www.tekool.com/opcurl)
 Free Pascal, Delphi and Kylix binding written by Christophe Espern.
@@ -69,14 +75,17 @@ Lua: [luacurl](http://luacurl.luaforge.net/) by Alexander 
Marinov, [Lua-cURL](ht
 
 
[Pascal](https://web.archive.org/web/20030804091414/houston.quik.com/jkp/curlpas/)
 Free Pascal, Delphi and Kylix binding written by Jeffrey Pohlmeyer.
 
-Perl: [WWW--Curl](https://github.com/szbalint/WWW--Curl) Maintained by Cris
+Perl: [WWW::Curl](https://github.com/szbalint/WWW--Curl) Maintained by Cris
 Bailiff and Bálint Szilakszi,
 [perl6-net-curl](https://github.com/azawawi/perl6-net-curl) by Ahmad M. Zawawi
+[NET::Curl](https://metacpan.org/pod/Net::Curl) by Przemyslaw Iskra
 
 [PHP](https://php.net/curl) Originally written by Sterling Hughes
 
 [PostgreSQL](https://github.com/pramsey/pgsql-http) - HTTP client for 
PostgreSQL
 
+[PureBasic](https://www.purebasic.com/documentation/http/index.html) uses 
libcurl in its "native" HTTP subsystem
+
 [Python](http://pycurl.io/) PycURL by Kjetil Jacobsen
 
 [R](https://cran.r-project.org/package=curl)
diff --git a/docs/ESNI.md b/docs/ESNI.md
new file mode 100644
index 000000000..eefb6662b
--- /dev/null
+++ b/docs/ESNI.md
@@ -0,0 +1,139 @@
+# TLS: ESNI support in curl and libcurl
+
+## Summary
+
+**ESNI** means **Encrypted Server Name Indication**, a TLS 1.3
+extension which is currently the subject of an
+[IETF Draft][tlsesni].
+
+This file is intended to show the current state of ESNI support
+in **curl** and **libcurl**.
+
+At the end of August 2019, an [experimental fork of curl][niallorcurl],
+built using an [experimental fork of OpenSSL][sftcdopenssl], which in
+turn provided an implementation of ESNI, was demonstrated
+interoperating with a server belonging to the [DEfO
+Project][defoproj].
+
+Further sections here describe
+
+-   resources needed for building and demonstrating **curl** support
+    for ESNI,
+
+-   progress to date,
+
+-   TODO items, and
+
+-   additional details of specific stages of the progress.
+
+## Resources needed
+
+To build and demonstrate ESNI support in **curl** and/or **libcurl**,
+you will need
+
+-   a TLS library, supported by **libcurl**, which implements ESNI;
+
+-   an edition of **curl** and/or **libcurl** which supports the ESNI
+    implementation of the chosen TLS library;
+
+-   an environment for building and running **curl**, and at least
+    building **OpenSSL**;
+
+-   a server, supporting ESNI, against which to run a demonstration
+    and perhaps a specific target URL;
+
+-   some instructions.
+
+The following set of resources is currently known to be available.
+
+| Set  | Component    | Location                      | Remarks                                    |
+|:-----|:-------------|:------------------------------|:-------------------------------------------|
+| DEfO | TLS library  | [sftcd/openssl][sftcdopenssl] | Tag *esni-2019-08-30* avoids bleeding edge |
+|      | curl fork    | [niallor/curl][niallorcurl]   | Tag *esni-2019-08-30* likewise             |
+|      | instructions | [ESNI-README][niallorreadme]  |                                            |
+
+## Progress
+
+### PR 4011 (Jun 2019) expected in curl release 7.67.0 (Oct 2019)
+
+-   Details [below](#pr4011);
+
+-   New **curl** feature: `CURL_VERSION_ESNI`;
+
+-   New configuration option: `--enable-esni`;
+
+-   Build-time check for availability of resources needed for ESNI
+    support;
+
+-   Pre-processor symbol `USE_ESNI` for conditional compilation of
+    ESNI support code, subject to configuration option and
+    availability of needed resources.
+
+## TODO
+
+-   (next PR) Add libcurl options to set ESNI parameters.
+
+-   (next PR) Add curl tool command line options to set ESNI parameters.
+
+-   (WIP) Extend DoH functions so that published ESNI parameters can be
+    retrieved from DNS instead of being required as options.
+
+-   (WIP) Work with OpenSSL community to finalize ESNI API.
+
+-   Track OpenSSL ESNI API in libcurl
+
+-   Identify and implement any changes needed for CMake.
+
+-   Optimize build-time checking of available resources.
+
+-   Encourage ESNI support work on other TLS/SSL backends.
+
+## Additional detail
+
+### PR 4011
+
+**TLS: Provide ESNI support framework for curl and libcurl**
+
+The proposed change provides a framework to facilitate work to
+implement ESNI support in curl and libcurl. It is not intended
+either to provide ESNI functionality or to favour any particular
+TLS-providing backend. Specifically, the change reserves a
+feature bit for ESNI support (symbol `CURL_VERSION_ESNI`),
+implements setting and reporting of this bit, includes dummy
+book-keeping for the symbol, adds a build-time configuration
+option (`--enable-esni`), provides an extensible check for
+resources available to provide ESNI support, and defines a
+compiler pre-processor symbol (`USE_ESNI`) accordingly.
+
+Proposed-by: @niallor (Niall O'Reilly)\
+Encouraged-by: @sftcd (Stephen Farrell)\
+See-also: [this message](https://curl.haxx.se/mail/lib-2019-05/0108.html)
+
+Limitations:
+-   Book-keeping (symbols-in-versions) needs real release number, not 'DUMMY'.
+
+-   Framework is incomplete, as it covers autoconf, but not CMake.
+
+-   Check for available resources, although extensible, refers only to
+    specific work in progress ([described
+    here](https://github.com/sftcd/openssl/tree/master/esnistuff)) to
+    implement ESNI for OpenSSL, as this is the immediate motivation
+    for the proposed change.
+
+## References
+
+CloudFlare blog: [Encrypting SNI: Fixing One of the Core Internet 
Bugs][corebug]
+
+Cloudflare blog: [Encrypt it or lose it: how encrypted SNI works][esniworks]
+
+IETF Draft: [Encrypted Server Name Indication for TLS 1.3][tlsesni]
+
+---
+
+[tlsesni]:             https://datatracker.ietf.org/doc/draft-ietf-tls-esni/
+[esniworks]:   https://blog.cloudflare.com/encrypted-sni/
+[corebug]:             https://blog.cloudflare.com/esni/
+[defoproj]:            https://defo.ie/
+[sftcdopenssl]: https://github.com/sftcd/openssl/
+[niallorcurl]: https://github.com/niallor/curl/
+[niallorreadme]: https://github.com/niallor/curl/blob/master/ESNI-README.md
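
A minimal sketch of probing at run time for the feature bit described above,
assuming a libcurl built with this framework; `CURL_VERSION_ESNI` only exists
in such builds, hence the `#ifdef` guard (in gnurl the header is
<gnurl/curl.h>):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
      printf("libcurl %s\n", info->version);
    #ifdef CURL_VERSION_ESNI
      if(info->features & CURL_VERSION_ESNI) {
        printf("ESNI support is available\n");
        return 0;
      }
    #endif
      printf("no ESNI support in this libcurl\n");
      return 1;
    }
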
diff --git a/docs/HTTP3.md b/docs/HTTP3.md
index 1e9b183c4..2dbd25688 100644
--- a/docs/HTTP3.md
+++ b/docs/HTTP3.md
@@ -33,7 +33,7 @@ in the master branch using pull-requests, just like ordinary 
changes.
 
 Build (patched) OpenSSL
 
-     % git clone --depth 1 -b openssl-quic-draft-22 
https://github.com/tatsuhiro-t/openssl
+     % git clone --depth 1 -b openssl-quic-draft-23 
https://github.com/tatsuhiro-t/openssl
      % cd openssl
      % ./config enable-tls1_3 --prefix=<somewhere1>
      % make
@@ -52,10 +52,10 @@ Build nghttp3
 Build ngtcp2
 
      % cd ..
-     % git clone -b draft-22 https://github.com/ngtcp2/ngtcp2
+     % git clone https://github.com/ngtcp2/ngtcp2
      % cd ngtcp2
      % autoreconf -i
-     % ./configure 
PKG_CONFIG_PATH=<somewhere1>/lib/pkgconfig:<somewhere2>/lib/pkgconfig 
LDFLAGS="-Wl,-rpath,<somehere1>/lib" --prefix==<somewhere3>
+     % ./configure 
PKG_CONFIG_PATH=<somewhere1>/lib/pkgconfig:<somewhere2>/lib/pkgconfig 
LDFLAGS="-Wl,-rpath,<somewhere1>/lib" --prefix=<somewhere3>
      % make
      % make install
 
@@ -65,18 +65,9 @@ Build curl
      % git clone https://github.com/curl/curl
      % cd curl
      % ./buildconf
-     % LDFLAGS="-Wl,-rpath,<somewhere1>/lib" ./configure 
-with-ssl=<somewhere1> --with-nghttp3=<somewhere2> --with-ngtcp2=<somewhere3>
+     % LDFLAGS="-Wl,-rpath,<somewhere1>/lib" ./configure 
--with-ssl=<somewhere1> --with-nghttp3=<somewhere2> --with-ngtcp2=<somewhere3>
      % make
 
-## Running
-
-Make sure the custom OpenSSL library is the one used at run-time, as otherwise
-you'll just get ld.so linker errors.
-
-## Invoke from command line
-
-    curl --http3 https://nghttp2.org:8443/
-
 # quiche version
 
 ## build
@@ -91,9 +82,9 @@ Build BoringSSL (it needs to be built manually so it can be 
reused with curl):
      % mkdir build
      % cd build
      % cmake -DCMAKE_POSITION_INDEPENDENT_CODE=on ..
-     % make -j`nproc`
+     % make
      % cd ..
-     % mkdir .openssl/lib -p
+     % mkdir -p .openssl/lib
      % cp build/crypto/libcrypto.a build/ssl/libssl.a .openssl/lib
      % ln -s $PWD/include .openssl
 
@@ -109,13 +100,16 @@ Clone and build curl:
      % cd curl
      % ./buildconf
      % ./configure LDFLAGS="-Wl,-rpath,$PWD/../quiche/target/release" 
--with-ssl=$PWD/../quiche/deps/boringssl/.openssl 
--with-quiche=$PWD/../quiche/target/release
-     % make -j`nproc`
+     % make
+
+## Run
+
+Use HTTP/3 directly:
+
+    curl --http3 https://nghttp2.org:8443/
 
-## Running
+Upgrade via Alt-Svc:
 
-Make an HTTP/3 request.
+    curl --alt-svc altsvc.cache https://quic.aiortc.org/
 
-     % src/curl --http3 https://cloudflare-quic.com/
-     % src/curl --http3 https://facebook.com/
-     % src/curl --http3 https://quic.aiortc.org:4433/
-     % src/curl --http3 https://quic.rocks:4433/
+See this [list of public HTTP/3 servers](https://bagder.github.io/HTTP3-test/)
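
A minimal libcurl sketch matching the two command lines above, assuming an
HTTP/3-enabled build as described in this document (the URL and cache file
name are just the samples used above):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h = curl_easy_init();
      if(h) {
        curl_easy_setopt(h, CURLOPT_URL, "https://nghttp2.org:8443/");
        /* force HTTP/3 directly, like `curl --http3` */
        curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
        /* or instead let Alt-Svc upgrade later transfers:
           curl_easy_setopt(h, CURLOPT_ALTSVC, "altsvc.cache"); */
        curl_easy_perform(h);
        curl_easy_cleanup(h);
      }
      return 0;
    }
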
diff --git a/docs/INSTALL.md b/docs/INSTALL.md
index d287d55e4..78d632c70 100644
--- a/docs/INSTALL.md
+++ b/docs/INSTALL.md
@@ -7,6 +7,18 @@ document does not describe how to install curl or libcurl 
using such a binary
 package. This document describes how to compile, build and install curl and
 libcurl from source code.
 
+## Building using vcpkg
+
+You can download and install curl and libcurl using the 
[vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager:
+
+    git clone https://github.com/Microsoft/vcpkg.git
+    cd vcpkg
+    ./bootstrap-vcpkg.sh
+    ./vcpkg integrate install
+    vcpkg install curl[tool]
+
+The curl port in vcpkg is kept up to date by Microsoft team members and 
community contributors. If the version is out of date, please [create an issue 
or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository.
+
 ## Building from git
 
 If you get your code off a git repository instead of a release tarball, see
@@ -56,12 +68,12 @@ you have pkg-config installed, set the pkg-config path 
first, like this:
 
 Without pkg-config installed, use this:
 
-   ./configure --with-ssl=/opt/OpenSSL
+    ./configure --with-ssl=/opt/OpenSSL
 
 If you insist on forcing a build without SSL support, even though you may
 have OpenSSL installed in your system, you can run configure like this:
 
-   ./configure --without-ssl
+    ./configure --without-ssl
 
 If you have OpenSSL installed, but with the libraries in one place and the
 header files somewhere else, you have to set the `LDFLAGS` and `CPPFLAGS`
diff --git a/docs/KNOWN_BUGS b/docs/KNOWN_BUGS
index 5850f7fbd..5134e7367 100644
--- a/docs/KNOWN_BUGS
+++ b/docs/KNOWN_BUGS
@@ -12,7 +12,6 @@ check the changelog of the current development status, as one 
or more of these
 problems may have been fixed or changed somewhat since this was written!
 
  1. HTTP
- 1.1 CURLFORM_CONTENTLEN in an array
  1.3 STARTTRANSFER time is wrong for HTTP POSTs
  1.4 multipart formposts file name encoding
  1.5 Expect-100 meets 417
@@ -55,6 +54,7 @@ problems may have been fixed or changed somewhat since this 
was written!
  5.7 Visual Studio project gaps
  5.8 configure finding libs in wrong directory
  5.9 Utilize Requires.private directives in libcurl.pc
+ 5.10 IDN tests failing on Windows / MSYS2
 
  6. Authentication
  6.1 NTLM authentication and unicode
@@ -101,6 +101,7 @@ problems may have been fixed or changed somewhat since this 
was written!
 
  12. LDAP and OpenLDAP
  12.1 OpenLDAP hangs after returning results
+ 12.2 LDAP on Windows does authentication wrong?
 
  13. TCP/IP
  13.1 --interface for ipv6 binds to unusable IP address
@@ -112,15 +113,6 @@ problems may have been fixed or changed somewhat since 
this was written!
 
 1. HTTP
 
-1.1 CURLFORM_CONTENTLEN in an array
-
- It is not possible to pass a 64-bit value using CURLFORM_CONTENTLEN with
- CURLFORM_ARRAY, when compiled on 32-bit platforms that support 64-bit
- integers. This is because the underlying structure 'curl_forms' uses a dual
- purpose char* for storing these values in via casting. For more information
- see the now closed related issue:
- https://github.com/curl/curl/issues/608
-
 1.3 STARTTRANSFER time is wrong for HTTP POSTs
 
  Wrong STARTTRANSFER timer accounting for POST requests Timer works fine with
@@ -428,6 +420,13 @@ problems may have been fixed or changed somewhat since 
this was written!
 
  https://github.com/curl/curl/issues/864
 
+5.10 IDN tests failing on Windows / MSYS2
+
+ It seems like MSYS2 does some UTF-8-to-something-else conversion for Windows
+ compatibility.
+
+ https://github.com/curl/curl/issues/3747
+
 6. Authentication
 
 6.1 NTLM authentication and unicode
@@ -725,6 +724,9 @@ problems may have been fixed or changed somewhat since this 
was written!
  See https://github.com/curl/curl/issues/622 and
      https://curl.haxx.se/mail/lib-2016-01/0101.html
 
+12.2 LDAP on Windows does authentication wrong?
+
+ https://github.com/curl/curl/issues/3116
 
 13. TCP/IP
 
diff --git a/docs/Makefile.am b/docs/Makefile.am
index de3487010..e0e3f7bfd 100644
--- a/docs/Makefile.am
+++ b/docs/Makefile.am
@@ -49,6 +49,7 @@ EXTRA_DIST =                                    \
  CODE_STYLE.md                                  \
  CONTRIBUTE.md                                  \
  DEPRECATE.md                                   \
+ ESNI.md                                        \
  EXPERIMENTAL.md                                \
  FAQ                                            \
  FEATURES                                       \
diff --git a/docs/THANKS b/docs/THANKS
index 73b84cfdb..884906ae2 100644
--- a/docs/THANKS
+++ b/docs/THANKS
@@ -51,6 +51,7 @@ Alex Chan
 Alex Fishman
 Alex Grebenschikov
 Alex Gruz
+Alex Konev
 Alex Malinovich
 Alex Mayorga
 Alex McLellan
@@ -58,6 +59,7 @@ Alex Neblett
 Alex Nichols
 Alex Potapenko
 Alex Rousskov
+Alex Samorukov
 Alex Suykov
 Alex Vinnik
 Alex aka WindEagle
@@ -116,6 +118,7 @@ Andrei Karas
 Andrei Kurushin
 Andrei Neculau
 Andrei Sedoi
+Andrei Valeriu BICA
 Andrei Virtosu
 Andrej E Baranov
 Andrew Benham
@@ -177,9 +180,11 @@ Balaji Salunke
 Balazs Kovacsics
 Balint Szilakszi
 Barry Abrahamson
+Barry Pollard
 Bart Whiteley
 Bas Mevissen
 Bas van Schaik
+Bastien Bouclet
 Basuke Suzuki
 Ben Boeckel
 Ben Darnell
@@ -257,6 +262,7 @@ Bruno Thomsen
 Bruno de Carvalho
 Bryan Henderson
 Bryan Kemp
+Bylon2 on github
 Byrial Jensen
 Caleb Raitto
 Cameron Kaiser
@@ -304,7 +310,9 @@ Christian Schmitz
 Christian Stewart
 Christian Vogt
 Christian Weisgerber
+Christoph M. Becker
 Christophe Demory
+Christophe Dervieux
 Christophe Legry
 Christopher Conroy
 Christopher Head
@@ -382,6 +390,7 @@ Daniel Romero
 Daniel Schauenberg
 Daniel Seither
 Daniel Shahaf
+Daniel Silverstone
 Daniel Steinberg
 Daniel Stenberg
 Daniel Theron
@@ -436,6 +445,7 @@ David Woodhouse
 David Wright
 David Yan
 Dengminwen
+Denis Chaplygin
 Denis Feklushkin
 Denis Ollier
 Dennis Clarke
@@ -520,6 +530,7 @@ Elliot Saba
 Ellis Pritchard
 Elmira A Semenova
 Emanuele Bovisio
+Emil Engler
 Emil Lerner
 Emil Romanus
 Emiliano Ida
@@ -589,6 +600,7 @@ Forrest Cahoon
 Francisco Moraes
 Francisco Sedano
 Francois Petitjean
+Francois Rivard
 Frank Denis
 Frank Gevaerts
 Frank Hempel
@@ -622,6 +634,7 @@ Georg Horn
 Georg Huettenegger
 Georg Lippitsch
 Georg Wicherski
+George Liu
 Gerd v. Egidy
 Gergely Nagy
 Gerhard Herre
@@ -633,6 +646,7 @@ Gil Weber
 Gilad
 Gilbert Ramirez Jr.
 Gilles Blanc
+Gilles Vollant
 Giorgos Oikonomou
 Gisle Vanem
 GitYuanQu on github
@@ -657,6 +671,7 @@ Greg Rowe
 Greg Zavertnik
 Gregory Nicholls
 Gregory Szorc
+Griffin Downs
 Grigory Entin
 Guenole Bescon
 Guido Berhoerster
@@ -727,6 +742,7 @@ Ihor Karpenko
 Iida Yosiaki
 Ilguiz Latypov
 Ilja van Sprundel
+Ilya Kosarev
 Immanuel Gregoire
 Inca R
 Ingmar Runge
@@ -744,6 +760,7 @@ Ivo Bellin Salarin
 Jack Zhang
 Jackarain on github
 Jacky Lam
+Jacob Barthelmeh
 Jacob Meuser
 Jacob Moshenko
 Jactry Zeng
@@ -813,6 +830,7 @@ Jeff Phillips
 Jeff Pohlmeyer
 Jeff Weber
 Jeffrey Walton
+Jens Finkhaeuser
 Jens Rantil
 Jens Schleusener
 Jeremie Rapin
@@ -840,6 +858,7 @@ Jim Freeman
 Jim Fuller
 Jim Hollinger
 Jim Meyering
+Jimmy Gaussen
 Jiri Dvorak
 Jiri Hruska
 Jiri Jaburek
@@ -890,6 +909,7 @@ John Weismiller
 John Wilkinson
 John-Mark Bell
 Johnny Luong
+Jojojov on github
 Jon DeVree
 Jon Grubbs
 Jon Nelson
@@ -1070,6 +1090,7 @@ Luca Altea
 Luca Boccassi
 Lucas Adamski
 Lucas Pardue
+Lucas Severo
 Ludek Finstrle
 Ludovico Cavedon
 Ludwig Nussel
@@ -1107,6 +1128,7 @@ Marc Kleine-Budde
 Marc Renault
 Marc Schlatter
 Marc-Antoine Perennou
+Marcel Hernandez
 Marcel Raad
 Marcel Roelofs
 Marcelo Echeverria
@@ -1151,6 +1173,7 @@ Martin Drasar
 Martin Dreher
 Martin Frodl
 Martin Galvan
+Martin Gartner
 Martin Hager
 Martin Hedenfalk
 Martin Jansen
@@ -1283,6 +1306,7 @@ Nate Prewitt
 Nathan Coulter
 Nathan O'Sullivan
 Nathanael Nerode
+Nathaniel J. Smith
 Nathaniel Waisbrot
 Naveen Chandran
 Naveen Noel
@@ -1292,6 +1316,7 @@ Neil Bowers
 Neil Dunbar
 Neil Kolban
 Neil Spring
+Niall O'Reilly
 Nic Roets
 Nicholas Maniscalco
 Nick Draffen
@@ -1370,7 +1395,9 @@ Patrick Smith
 Patrick Watson
 Patrik Thunstrom
 Pau Garcia i Quiles
+Paul B. Omta
 Paul Donohue
+Paul Dreik
 Paul Groke
 Paul Harrington
 Paul Harris
@@ -1415,6 +1442,7 @@ Peter Piekarski
 Peter Silva
 Peter Simonyi
 Peter Su
+Peter Sumatra
 Peter Sylvester
 Peter Todd
 Peter Varga
@@ -1438,6 +1466,7 @@ Philip Langdale
 Philip Prindeville
 Philipp Waehnert
 Philippe Hameau
+Philippe Marguinaud
 Philippe Raoult
 Philippe Vaucher
 Pierre
@@ -1446,6 +1475,7 @@ Pierre Chapuis
 Pierre Joye
 Pierre Ynard
 Piotr Dobrogost
+Piotr Komborski
 Po-Chuan Hsieh
 Pooyan McSporran
 Poul T Lomholt
@@ -1563,6 +1593,7 @@ Rodric Glaser
 Rodrigo Silva
 Roger Leigh
 Roland Blom
+Roland Hieber
 Roland Krikava
 Roland Zimmermann
 Rolf Eike Beer
@@ -1626,6 +1657,7 @@ Sean Burford
 Sean MacLennan
 Sean Miller
 Sebastiaan van Erk
+Sebastian Haglund
 Sebastian Mundry
 Sebastian Pohlschmidt
 Sebastian Rasmussen
@@ -1669,6 +1701,7 @@ Somnath Kundu
 Song Ma
 Sonia Subramanian
 Spacen Jasset
+Spezifant on github
 Spiridonoff A.V
 Spork Schivago
 Stadler Stephan
@@ -1714,8 +1747,10 @@ Steven G. Johnson
 Steven Gu
 Steven M. Schweda
 Steven Parkes
+Stian Soiland-Reyes
 Stoned Elipot
 Stuart Henderson
+SumatraPeter on github
 Sune Ahlgren
 Sunny Purushe
 Sven Anders
@@ -1827,6 +1862,7 @@ Toshiyuki Maezawa
 Traian Nicolescu
 Travis Burtrum
 Travis Obenhaus
+Trivikram Kamat
 Troels Walsted Hansen
 Troy Engel
 Tseng Jun
@@ -1840,6 +1876,7 @@ Ulrich Doehner
 Ulrich Telle
 Ulrich Zadow
 Valentin David
+Valerii Zapodovnikov
 Vasiliy Faronov
 Vasily Lobaskin
 Vasy Okhin
@@ -1850,6 +1887,7 @@ Victor Snezhko
 Vijay Panghal
 Vikram Saxena
 Viktor Szakats
+Vilhelm Prytz
 Ville Skyttä
 Vilmos Nebehaj
 Vincas Razma
@@ -1899,6 +1937,7 @@ Yang Tse
 Yarram Sunil
 Yasuharu Yamada
 Yasuhiro Matsumoto
+Yechiel Kalmenson
 Yehezkel Horowitz
 Yehoshua Hershberg
 Yi Huang
@@ -1966,6 +2005,7 @@ jonrumsey on github
 joshhe on github
 jungle-boogie on github
 jveazey on github
+jzinn on github
 ka7 on github
 kreshano on github
 l00p3r on Hackerone
@@ -1977,6 +2017,7 @@ masbug on github
 mccormickt12 on github
 migueljcrum on github
 mkzero on github
+momala454 on github
 moohoorama on github
 nedres on github
 neex on github
@@ -1984,6 +2025,7 @@ neheb on github
 nevv on HackerOne/curl
 niallor on github
 nianxuejie on github
+nico-abram on github
 niner on github
 nk
 nopjmp on github
diff --git a/docs/THANKS-filter b/docs/THANKS-filter
index d2adda578..cabc93933 100644
--- a/docs/THANKS-filter
+++ b/docs/THANKS-filter
@@ -25,7 +25,7 @@ s/upstream tests 305 and 404//
 s/Gaël PORTAY/Gaël Portay/
 s/Romulo Ceccon/Romulo A. Ceccon/
 s/Nach M. S$/Nach M. S./
-s/Jay Satiro/Ray Satiro/
+s/Ja[yt] Satiro/Ray Satiro/
 s/Richard J. Moore/Richard Moore/
 s/Sergey Nikulov/Sergei Nikulov/
 s/Petr Písař/Petr Pisar/
diff --git a/docs/TODO b/docs/TODO
index 6d30d26a4..42d37c1bc 100644
--- a/docs/TODO
+++ b/docs/TODO
@@ -18,6 +18,7 @@
 
  1. libcurl
  1.1 TFO support on Windows
+ 1.2 Consult %APPDATA% also for .netrc
  1.3 struct lifreq
  1.5 get rid of PATH_MAX
  1.7 Support HTTP/2 for HTTP(S) proxies
@@ -122,6 +123,7 @@
 
  17. SSH protocols
  17.1 Multiplexing
+ 17.2 Handle growing SFTP files
  17.3 Support better than MD5 hostkey hash
  17.4 Support CURLOPT_PREQUOTE
 
@@ -181,6 +183,12 @@
 
  See https://github.com/curl/curl/pull/3378
 
+1.2 Consult %APPDATA% also for .netrc
+
+ %APPDATA%\.netrc is not considered when running on Windows. Shouldn't it?
+
+ See https://github.com/curl/curl/issues/4016
+
 1.3 struct lifreq
 
  Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
@@ -792,6 +800,16 @@ that doesn't exist on the server, just like 
--ftp-create-dirs.
  To fix this, libcurl would have to detect an existing connection and "attach"
  the new transfer to the existing one.
 
+17.2 Handle growing SFTP files
+
+ The SFTP code in libcurl checks the file size *before* a transfer starts and
+ then proceeds to transfer exactly that amount of data. If the remote file
+ grows while the transfer is in progress, libcurl won't notice and will not
+ adapt. The OpenSSH SFTP command line tool does, and libcurl could also just
+ attempt to download more to see if there is more to get...
+
+ https://github.com/curl/curl/issues/4344
+
 17.3 Support better than MD5 hostkey hash
 
  libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
diff --git a/docs/cmdline-opts/Makefile.inc b/docs/cmdline-opts/Makefile.inc
index 6b4387475..c90e9c5fb 100644
--- a/docs/cmdline-opts/Makefile.inc
+++ b/docs/cmdline-opts/Makefile.inc
@@ -97,13 +97,14 @@ DPAGES =                                    \
   no-buffer.d                                  \
   no-keepalive.d                               \
   no-npn.d                                     \
+  no-progress-meter.d                           \
   no-sessionid.d                               \
   noproxy.d                                    \
   ntlm.d ntlm-wb.d                             \
   oauth2-bearer.d                              \
   output.d                                      \
-  pass.d                                       \
   parallel.d                                    \
+  pass.d                                       \
   parallel-max.d                                \
   path-as-is.d                                 \
   pinnedpubkey.d                               \
diff --git a/docs/cmdline-opts/no-progress-meter.d 
b/docs/cmdline-opts/no-progress-meter.d
new file mode 100644
index 000000000..aff0717d3
--- /dev/null
+++ b/docs/cmdline-opts/no-progress-meter.d
@@ -0,0 +1,10 @@
+Long: no-progress-meter
+Help: Do not show the progress meter
+See-also: verbose silent
+Added: 7.67.0
+---
+Option to switch off the progress meter output without muting or otherwise
+affecting warning and informational messages like --silent does.
+
+Note that it is the negated option name that is documented here; you can thus
+use --progress-meter to enable the progress meter again.
diff --git a/docs/examples/Makefile.inc b/docs/examples/Makefile.inc
index 6fd8ecd76..f03fcf2f0 100644
--- a/docs/examples/Makefile.inc
+++ b/docs/examples/Makefile.inc
@@ -45,4 +45,4 @@ COMPLICATED_EXAMPLES = curlgtk.c curlx.c htmltitle.cpp 
cacertinmem.c \
   sampleconv.c synctime.c threaded-ssl.c evhiperfifo.c \
   smooth-gtk-thread.c version-check.pl href_extractor.c asiohiper.cpp \
   multi-uv.c xmlstream.c usercertinmem.c sessioninfo.c \
-  threaded-shared-conn.c crawler.c ephiperfifo.c
+  threaded-shared-conn.c crawler.c ephiperfifo.c multi-event.c
diff --git a/docs/examples/externalsocket.c b/docs/examples/externalsocket.c
index 516fa98b0..4bc12a325 100644
--- a/docs/examples/externalsocket.c
+++ b/docs/examples/externalsocket.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/ftp-wildcard.c b/docs/examples/ftp-wildcard.c
index 7d98ef98b..e560731bf 100644
--- a/docs/examples/ftp-wildcard.c
+++ b/docs/examples/ftp-wildcard.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/htmltidy.c b/docs/examples/htmltidy.c
index cdfc89dac..9d4ad1a5d 100644
--- a/docs/examples/htmltidy.c
+++ b/docs/examples/htmltidy.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/htmltitle.cpp b/docs/examples/htmltitle.cpp
index bfa27303e..8dff811e8 100644
--- a/docs/examples/htmltitle.cpp
+++ b/docs/examples/htmltitle.cpp
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/http2-upload.c b/docs/examples/http2-upload.c
index bdec19b4a..5f32faaba 100644
--- a/docs/examples/http2-upload.c
+++ b/docs/examples/http2-upload.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/imap-append.c b/docs/examples/imap-append.c
index 6c903dd4c..56704d4c6 100644
--- a/docs/examples/imap-append.c
+++ b/docs/examples/imap-append.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/multi-app.c b/docs/examples/multi-app.c
index 1b8fa30e1..6359ee7c7 100644
--- a/docs/examples/multi-app.c
+++ b/docs/examples/multi-app.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/multi-uv.c b/docs/examples/multi-event.c
similarity index 78%
copy from docs/examples/multi-uv.c
copy to docs/examples/multi-event.c
index a76201496..3ec16b157 100644
--- a/docs/examples/multi-uv.c
+++ b/docs/examples/multi-event.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -21,31 +21,26 @@
  ***************************************************************************/
 
 /* <DESC>
- * multi_socket API using libuv
+ * multi_socket API using libevent
  * </DESC>
  */
-/* Example application using the multi socket interface to download multiple
-   files in parallel, powered by libuv.
-
-   Requires libuv and (of course) libcurl.
-
-   See https://nikhilm.github.io/uvbook/ for more information on libuv.
-*/
 
 #include <stdio.h>
 #include <stdlib.h>
-#include <uv.h>
+#include <event2/event.h>
 #include <gnurl/curl.h>
 
-uv_loop_t *loop;
+struct event_base *base;
 CURLM *curl_handle;
-uv_timer_t timeout;
+struct event *timeout;
 
 typedef struct curl_context_s {
-  uv_poll_t poll_handle;
+  struct event *event;
   curl_socket_t sockfd;
 } curl_context_t;
 
+static void curl_perform(int fd, short event, void *arg);
+
 static curl_context_t* create_curl_context(curl_socket_t sockfd)
 {
   curl_context_t *context;
@@ -54,21 +49,16 @@ static curl_context_t* create_curl_context(curl_socket_t 
sockfd)
 
   context->sockfd = sockfd;
 
-  uv_poll_init_socket(loop, &context->poll_handle, sockfd);
-  context->poll_handle.data = context;
+  context->event = event_new(base, sockfd, 0, curl_perform, context);
 
   return context;
 }
 
-static void curl_close_cb(uv_handle_t *handle)
-{
-  curl_context_t *context = (curl_context_t *) handle->data;
-  free(context);
-}
-
 static void destroy_curl_context(curl_context_t *context)
 {
-  uv_close((uv_handle_t *) &context->poll_handle, curl_close_cb);
+  event_del(context->event);
+  event_free(context->event);
+  free(context);
 }
 
 static void add_download(const char *url, int num)
@@ -129,18 +119,18 @@ static void check_multi_info(void)
   }
 }
 
-static void curl_perform(uv_poll_t *req, int status, int events)
+static void curl_perform(int fd, short event, void *arg)
 {
   int running_handles;
   int flags = 0;
   curl_context_t *context;
 
-  if(events & UV_READABLE)
+  if(event & EV_READ)
     flags |= CURL_CSELECT_IN;
-  if(events & UV_WRITABLE)
+  if(event & EV_WRITE)
     flags |= CURL_CSELECT_OUT;
 
-  context = (curl_context_t *) req->data;
+  context = (curl_context_t *) arg;
 
   curl_multi_socket_action(curl_handle, context->sockfd, flags,
                            &running_handles);
@@ -148,7 +138,7 @@ static void curl_perform(uv_poll_t *req, int status, int 
events)
   check_multi_info();
 }
 
-static void on_timeout(uv_timer_t *req)
+static void on_timeout(evutil_socket_t fd, short events, void *arg)
 {
   int running_handles;
   curl_multi_socket_action(curl_handle, CURL_SOCKET_TIMEOUT, 0,
@@ -159,13 +149,17 @@ static void on_timeout(uv_timer_t *req)
 static int start_timeout(CURLM *multi, long timeout_ms, void *userp)
 {
   if(timeout_ms < 0) {
-    uv_timer_stop(&timeout);
+    evtimer_del(timeout);
   }
   else {
     if(timeout_ms == 0)
       timeout_ms = 1; /* 0 means directly call socket_action, but we'll do it
                          in a bit */
-    uv_timer_start(&timeout, on_timeout, timeout_ms, 0);
+    struct timeval tv;
+    tv.tv_sec = timeout_ms / 1000;
+    tv.tv_usec = (timeout_ms % 1000) * 1000;
+    evtimer_del(timeout);
+    evtimer_add(timeout, &tv);
   }
   return 0;
 }
@@ -186,15 +180,21 @@ static int handle_socket(CURL *easy, curl_socket_t s, int 
action, void *userp,
     curl_multi_assign(curl_handle, s, (void *) curl_context);
 
     if(action != CURL_POLL_IN)
-      events |= UV_WRITABLE;
+      events |= EV_WRITE;
     if(action != CURL_POLL_OUT)
-      events |= UV_READABLE;
+      events |= EV_READ;
+
+    events |= EV_PERSIST;
+
+    event_del(curl_context->event);
+    event_assign(curl_context->event, base, curl_context->sockfd, events,
+      curl_perform, curl_context);
+    event_add(curl_context->event, NULL);
 
-    uv_poll_start(&curl_context->poll_handle, events, curl_perform);
     break;
   case CURL_POLL_REMOVE:
     if(socketp) {
-      uv_poll_stop(&((curl_context_t*)socketp)->poll_handle);
+      event_del(((curl_context_t*) socketp)->event);
       destroy_curl_context((curl_context_t*) socketp);
       curl_multi_assign(curl_handle, s, NULL);
     }
@@ -208,8 +208,6 @@ static int handle_socket(CURL *easy, curl_socket_t s, int 
action, void *userp,
 
 int main(int argc, char **argv)
 {
-  loop = uv_default_loop();
-
   if(argc <= 1)
     return 0;
 
@@ -218,7 +216,8 @@ int main(int argc, char **argv)
     return 1;
   }
 
-  uv_timer_init(loop, &timeout);
+  base = event_base_new();
+  timeout = evtimer_new(base, on_timeout, NULL);
 
   curl_handle = curl_multi_init();
   curl_multi_setopt(curl_handle, CURLMOPT_SOCKETFUNCTION, handle_socket);
@@ -228,8 +227,14 @@ int main(int argc, char **argv)
     add_download(argv[argc], argc);
   }
 
-  uv_run(loop, UV_RUN_DEFAULT);
+  event_base_dispatch(base);
+
   curl_multi_cleanup(curl_handle);
+  event_free(timeout);
+  event_base_free(base);
+
+  libevent_global_shutdown();
+  curl_global_cleanup();
 
   return 0;
 }
diff --git a/docs/examples/multithread.c b/docs/examples/multithread.c
index fb29c2492..4ab844bb5 100644
--- a/docs/examples/multithread.c
+++ b/docs/examples/multithread.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/postit2-formadd.c b/docs/examples/postit2-formadd.c
index dd2bd9660..af6348b25 100644
--- a/docs/examples/postit2-formadd.c
+++ b/docs/examples/postit2-formadd.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -34,7 +34,6 @@
  * <input type="submit" value="send" name="submit">
  * </form>
  *
- * This exact source code has not been verified to work.
  */
 
 #include <stdio.h>
diff --git a/docs/examples/postit2.c b/docs/examples/postit2.c
index 81d61c489..de352d3df 100644
--- a/docs/examples/postit2.c
+++ b/docs/examples/postit2.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -34,7 +34,6 @@
  * <input type="submit" value="send" name="submit">
  * </form>
  *
- * This exact source code has not been verified to work.
  */
 
 #include <stdio.h>
diff --git a/docs/examples/resolve.c b/docs/examples/resolve.c
index 417046076..75ee4e983 100644
--- a/docs/examples/resolve.c
+++ b/docs/examples/resolve.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/sampleconv.c b/docs/examples/sampleconv.c
index efa675665..97914ed8c 100644
--- a/docs/examples/sampleconv.c
+++ b/docs/examples/sampleconv.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/sendrecv.c b/docs/examples/sendrecv.c
index 9c8f12a2b..b9c6bbc8f 100644
--- a/docs/examples/sendrecv.c
+++ b/docs/examples/sendrecv.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/shared-connection-cache.c 
b/docs/examples/shared-connection-cache.c
index edf6c827c..f1073b1fb 100644
--- a/docs/examples/shared-connection-cache.c
+++ b/docs/examples/shared-connection-cache.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/smooth-gtk-thread.c 
b/docs/examples/smooth-gtk-thread.c
index 8b11d0fbc..af8c0ca6e 100644
--- a/docs/examples/smooth-gtk-thread.c
+++ b/docs/examples/smooth-gtk-thread.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/smtp-mime.c b/docs/examples/smtp-mime.c
index ff54d04ca..91fa89927 100644
--- a/docs/examples/smtp-mime.c
+++ b/docs/examples/smtp-mime.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/sslbackend.c b/docs/examples/sslbackend.c
index 6fc078181..b19e3dccd 100644
--- a/docs/examples/sslbackend.c
+++ b/docs/examples/sslbackend.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -56,7 +56,7 @@ int main(int argc, char **argv)
 
     return 0;
   }
-  else if(isdigit(*name)) {
+  else if(isdigit((int)(unsigned char)*name)) {
     int id = atoi(name);
 
     result = curl_global_sslset((curl_sslbackend)id, NULL, NULL);
diff --git a/docs/examples/synctime.c b/docs/examples/synctime.c
index f6a318c4a..2806965b5 100644
--- a/docs/examples/synctime.c
+++ b/docs/examples/synctime.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/threaded-shared-conn.c 
b/docs/examples/threaded-shared-conn.c
index 0e2c9d16d..baa83b4e7 100644
--- a/docs/examples/threaded-shared-conn.c
+++ b/docs/examples/threaded-shared-conn.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/examples/threaded-ssl.c b/docs/examples/threaded-ssl.c
index 902ea7dc4..cef0cf470 100644
--- a/docs/examples/threaded-ssl.c
+++ b/docs/examples/threaded-ssl.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/gnurl_multi_perform.3 
b/docs/libcurl/gnurl_multi_perform.3
index 836e9b580..f144babb4 100644
--- a/docs/libcurl/gnurl_multi_perform.3
+++ b/docs/libcurl/gnurl_multi_perform.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2015, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
@@ -46,6 +46,8 @@ know that there is one or more transfers less "running". You 
can then call
 \fIcurl_multi_info_read(3)\fP to get information about each individual
 completed transfer, and that returned info includes CURLcode and more. If an
 added handle fails very quickly, it may never be counted as a running_handle.
+You could use \fIcurl_multi_info_read(3)\fP to track the actual status of the
+added handles in that case.
 
 When \fIrunning_handles\fP is set to zero (0) on the return of this function,
 there is no longer any transfers in progress.
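A minimal sketch of that pattern, assuming a single easy handle and a
placeholder URL (gnurl installs the header as <gnurl/curl.h>):

  #include <stdio.h>
  #include <gnurl/curl.h>

  int main(void)
  {
    CURL *easy;
    CURLM *multi;
    int running = 1;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    easy = curl_easy_init();
    multi = curl_multi_init();
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy);

    while(running) {
      CURLMsg *msg;
      int queued;
      curl_multi_perform(multi, &running);
      curl_multi_wait(multi, NULL, 0, 1000, NULL);
      /* running only counts transfers; per-transfer success or failure is
         reported through curl_multi_info_read() */
      while((msg = curl_multi_info_read(multi, &queued))) {
        if(msg->msg == CURLMSG_DONE)
          fprintf(stderr, "done: %s\n", curl_easy_strerror(msg->data.result));
      }
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }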
diff --git a/docs/libcurl/gnurl_multi_setopt.3 
b/docs/libcurl/gnurl_multi_setopt.3
index 71b976236..dd141be0c 100644
--- a/docs/libcurl/gnurl_multi_setopt.3
+++ b/docs/libcurl/gnurl_multi_setopt.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2015, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
@@ -67,6 +67,8 @@ See \fICURLMOPT_SOCKETDATA(3)\fP
 See \fICURLMOPT_TIMERFUNCTION(3)\fP
 .IP CURLMOPT_TIMERDATA
 See \fICURLMOPT_TIMERDATA(3)\fP
+.IP CURLMOPT_MAX_CONCURRENT_STREAMS
+See \fICURLMOPT_MAX_CONCURRENT_STREAMS(3)\fP
 .SH RETURNS
 The standard CURLMcode for multi interface error codes. Note that it returns a
 CURLM_UNKNOWN_OPTION if you try setting an option that this version of libcurl
diff --git a/docs/libcurl/gnurl_multi_wait.3 b/docs/libcurl/gnurl_multi_wait.3
index d91481ab7..34d7c0411 100644
--- a/docs/libcurl/gnurl_multi_wait.3
+++ b/docs/libcurl/gnurl_multi_wait.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/gnurl_url_get.3 b/docs/libcurl/gnurl_url_get.3
index 7bbc0a09b..20edd1427 100644
--- a/docs/libcurl/gnurl_url_get.3
+++ b/docs/libcurl/gnurl_url_get.3
@@ -76,8 +76,9 @@ Scheme cannot be URL decoded on get.
 .IP CURLUPART_PASSWORD
 .IP CURLUPART_OPTIONS
 .IP CURLUPART_HOST
-If the host part is an IPv6 numeric address, the zoneid will not be part of
-the extracted host but is provided separately in \fICURLUPART_ZONEID\fP.
+The host name. If it is an IPv6 numeric address, the zoneid will not be part
+of it but is provided separately in \fICURLUPART_ZONEID\fP. IPv6 numerical
+addresses are returned within brackets ([]).
 .IP CURLUPART_ZONEID
 If the host name is a numeric IPv6 address, this field might also be set.
 .IP CURLUPART_PORT
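A minimal sketch of reading those two parts, assuming a link-local IPv6 URL
(the exact strings printed depend on the URL given):

  #include <stdio.h>
  #include <gnurl/curl.h>

  int main(void)
  {
    CURLU *u = curl_url();
    char *host = NULL;
    char *zoneid = NULL;

    /* %25 is the URL-encoded '%' that separates address and zone id */
    if(!curl_url_set(u, CURLUPART_URL, "http://[fe80::1%25eth0]/", 0)) {
      if(!curl_url_get(u, CURLUPART_HOST, &host, 0))
        printf("host:   %s\n", host);   /* bracketed, without the zone id */
      if(!curl_url_get(u, CURLUPART_ZONEID, &zoneid, 0))
        printf("zoneid: %s\n", zoneid);
      curl_free(host);
      curl_free(zoneid);
    }
    curl_url_cleanup(u);
    return 0;
  }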
diff --git a/docs/libcurl/gnurl_url_set.3 b/docs/libcurl/gnurl_url_set.3
index cbeff1b2c..5ef5cf34c 100644
--- a/docs/libcurl/gnurl_url_set.3
+++ b/docs/libcurl/gnurl_url_set.3
@@ -60,8 +60,9 @@ Scheme cannot be URL decoded on set.
 .IP CURLUPART_PASSWORD
 .IP CURLUPART_OPTIONS
 .IP CURLUPART_HOST
-The host name can use IDNA. The string must then be encoded as your locale
-says or UTF-8 (when winidn is used).
+The host name. If it is an IDNA name, the string must be encoded as your
+locale says or as UTF-8 (when WinIDN is used). If it is a bracketed IPv6
+numeric address it may contain a zone id (or you can use CURLUPART_ZONEID).
 .IP CURLUPART_ZONEID
 If the host name is a numeric IPv6 address, this field can also be set.
 .IP CURLUPART_PORT
@@ -111,6 +112,12 @@ instead "guesses" which scheme that was intended based on 
the host name.  If
 the outermost sub-domain name matches DICT, FTP, IMAP, LDAP, POP3 or SMTP then
 that scheme will be used, otherwise it picks HTTP. Conflicts with the
 \fICURLU_DEFAULT_SCHEME\fP option which takes precedence if both are set.
+.IP CURLU_NO_AUTHORITY
+If set, skips authority checks. The RFC allows individual schemes to omit the
+host part (normally the only mandatory part of the authority), but libcurl
+cannot know whether this is permitted for custom schemes. Specifying the flag
+permits empty authority sections, similar to how the file scheme is handled.
+
 .SH RETURN VALUE
 Returns a CURLUcode error value, which is CURLUE_OK (0) if everything went
 fine.
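A rough sketch of using that flag; the scheme name below is made up for
illustration and the expectation that the parse is rejected without
CURLU_NO_AUTHORITY is an assumption based on the description above:

  #include <stdio.h>
  #include <gnurl/curl.h>

  int main(void)
  {
    CURLU *u = curl_url();
    char *path = NULL;
    CURLUcode rc;

    /* "myscheme" is a custom scheme with an empty authority part */
    rc = curl_url_set(u, CURLUPART_URL, "myscheme:///index.html",
                      CURLU_NON_SUPPORT_SCHEME | CURLU_NO_AUTHORITY);
    if(!rc && !curl_url_get(u, CURLUPART_PATH, &path, 0)) {
      printf("path: %s\n", path);
      curl_free(path);
    }
    curl_url_cleanup(u);
    return 0;
  }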
diff --git a/docs/libcurl/libgnurl-errors.3 b/docs/libcurl/libgnurl-errors.3
index 2697efd5b..f3c714aab 100644
--- a/docs/libcurl/libgnurl-errors.3
+++ b/docs/libcurl/libgnurl-errors.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/libgnurl-multi.3 b/docs/libcurl/libgnurl-multi.3
index 96013552f..5d7c27b95 100644
--- a/docs/libcurl/libgnurl-multi.3
+++ b/docs/libcurl/libgnurl-multi.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
@@ -97,8 +97,7 @@ period for your select() calls.
 \fIcurl_multi_perform(3)\fP stores the number of still running transfers in
 one of its input arguments, and by reading that you can figure out when all
 the transfers in the multi handles are done. 'done' does not mean
-successful. One or more of the transfers may have failed. Tracking when this
-number changes, you know when one or more transfers are done.
+successful. One or more of the transfers may have failed. 
 
 To get information about completed transfers, to figure out success or not and
 similar, \fIcurl_multi_info_read(3)\fP should be called. It can return a
diff --git a/docs/libcurl/libgnurl-tutorial.3 b/docs/libcurl/libgnurl-tutorial.3
index c06e37760..c6f88d659 100644
--- a/docs/libcurl/libgnurl-tutorial.3
+++ b/docs/libcurl/libgnurl-tutorial.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLMOPT_PIPELINING_SITE_BL.3 
b/docs/libcurl/opts/CURLMOPT_MAX_CONCURRENT_STREAMS.3
similarity index 60%
copy from docs/libcurl/opts/GNURLMOPT_PIPELINING_SITE_BL.3
copy to docs/libcurl/opts/CURLMOPT_MAX_CONCURRENT_STREAMS.3
index 89cb1b0ed..ad32cf8e8 100644
--- a/docs/libcurl/opts/GNURLMOPT_PIPELINING_SITE_BL.3
+++ b/docs/libcurl/opts/CURLMOPT_MAX_CONCURRENT_STREAMS.3
@@ -20,39 +20,37 @@
 .\" *
 .\" **************************************************************************
 .\"
-.TH GNURLMOPT_PIPELINING_SITE_BL 3 "4 Nov 2014" "libcurl 7.39.0" 
"curl_multi_setopt options"
+.TH GNURLMOPT_MAX_CONCURRENT_STREAMS 3 "06 Nov 2019" "libcurl 7.67.0" 
"curl_multi_setopt options"
 .SH NAME
-CURLMOPT_PIPELINING_SITE_BL \- pipelining host blacklist
+CURLMOPT_MAX_CONCURRENT_STREAMS \- set max concurrent streams for HTTP/2
 .SH SYNOPSIS
+.nf
 #include <gnurl/curl.h>
 
-CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_PIPELINING_SITE_BL, char 
**hosts);
+CURLMcode curl_multi_setopt(CURLM *handle, CURLMOPT_MAX_CONCURRENT_STREAMS,
+                            long max);
+.fi
 .SH DESCRIPTION
-No function since pipelining was removed in 7.62.0.
+Pass a long indicating the \fBmax\fP. The set number will be used as the
+maximum number of concurrent streams libcurl should support on a connection
+done using HTTP/2.
 
-Pass a \fBhosts\fP array of char *, ending with a NULL entry. This is a list
-of sites that are blacklisted from pipelining, i.e sites that are known to not
-support HTTP pipelining. The array is copied by libcurl.
-
-Pass a NULL pointer to clear the blacklist.
+Valid values range from 1 to 2147483647 (2^31 - 1) and the default is 100.
+The value passed here is honoured subject to other system resource
+limits.
 .SH DEFAULT
-The default value is NULL, which means that there is no blacklist.
+100
 .SH PROTOCOLS
-HTTP(S)
+All
 .SH EXAMPLE
 .nf
-  site_blacklist[] =
-  {
-    "www.haxx.se",
-    "www.example.com:1234",
-    NULL
-  };
-
-  curl_multi_setopt(m, CURLMOPT_PIPELINING_SITE_BL, site_blacklist);
+  CURLM *m = curl_multi_init();
+  /* max concurrent streams 200 */
+  curl_multi_setopt(m, CURLMOPT_MAX_CONCURRENT_STREAMS, 200L);
 .fi
 .SH AVAILABILITY
-Added in 7.30.0
+Added in 7.67.0
 .SH RETURN VALUE
 Returns CURLM_OK if the option is supported, and CURLM_UNKNOWN_OPTION if not.
 .SH "SEE ALSO"
-.BR CURLMOPT_PIPELINING "(3), " CURLMOPT_PIPELINING_SERVER_BL "(3), "
+.BR CURLOPT_MAXCONNECTS "(3), " CURLMOPT_MAXCONNECTS "(3), "
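A slightly fuller sketch of setting this option together with HTTP/2
multiplexing; note that the value is only an upper bound and the server's own
stream limit can still be lower (an assumption consistent with the
description above):

  #include <gnurl/curl.h>

  int main(void)
  {
    CURLM *multi;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    multi = curl_multi_init();

    /* allow HTTP/2 multiplexing (the default in recent libcurl) and cap the
       number of concurrent streams per connection */
    curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);
    curl_multi_setopt(multi, CURLMOPT_MAX_CONCURRENT_STREAMS, 200L);

    /* ... add easy handles and drive them with curl_multi_perform() ... */

    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }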
diff --git a/docs/libcurl/opts/GNURLOPT_CURLU.3 
b/docs/libcurl/opts/GNURLOPT_CURLU.3
index bdcb05544..4721ed6ac 100644
--- a/docs/libcurl/opts/GNURLOPT_CURLU.3
+++ b/docs/libcurl/opts/GNURLOPT_CURLU.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_FOLLOWLOCATION.3 
b/docs/libcurl/opts/GNURLOPT_FOLLOWLOCATION.3
index 2b6372bd3..bb64da48c 100644
--- a/docs/libcurl/opts/GNURLOPT_FOLLOWLOCATION.3
+++ b/docs/libcurl/opts/GNURLOPT_FOLLOWLOCATION.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2015, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3 
b/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3
index 5a569fef9..be9cb99b6 100644
--- a/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3
+++ b/docs/libcurl/opts/GNURLOPT_HEADERFUNCTION.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_HEADEROPT.3 
b/docs/libcurl/opts/GNURLOPT_HEADEROPT.3
index eaea05dff..1171657fd 100644
--- a/docs/libcurl/opts/GNURLOPT_HEADEROPT.3
+++ b/docs/libcurl/opts/GNURLOPT_HEADEROPT.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3 
b/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3
index 260363e16..2782e69a2 100644
--- a/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3
+++ b/docs/libcurl/opts/GNURLOPT_HTTP_VERSION.3
@@ -62,7 +62,7 @@ TLS handshake. (Added in 7.49.0)
 directly to server given in the URL. Note that this cannot gracefully
 downgrade to earlier HTTP version if the server doesn't support HTTP/3.
 
-For more reliably upgrading to HTTP/3, set the prefered version to something
+For more reliably upgrading to HTTP/3, set the preferred version to something
 lower and let the server announce its HTTP/3 support via Alt-Svc:. See
 \fICURLOPT_ALTSVC(3)\fP.
 .SH DEFAULT
diff --git a/docs/libcurl/opts/GNURLOPT_LOCALPORT.3 
b/docs/libcurl/opts/GNURLOPT_LOCALPORT.3
index 1a8be59fc..3376873e8 100644
--- a/docs/libcurl/opts/GNURLOPT_LOCALPORT.3
+++ b/docs/libcurl/opts/GNURLOPT_LOCALPORT.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_LOCALPORTRANGE.3 
b/docs/libcurl/opts/GNURLOPT_LOCALPORTRANGE.3
index 4d3b98ff0..20aa156ad 100644
--- a/docs/libcurl/opts/GNURLOPT_LOCALPORTRANGE.3
+++ b/docs/libcurl/opts/GNURLOPT_LOCALPORTRANGE.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_PROXY_SSLVERSION.3 
b/docs/libcurl/opts/GNURLOPT_PROXY_SSLVERSION.3
index e09364277..7d3b0f5aa 100644
--- a/docs/libcurl/opts/GNURLOPT_PROXY_SSLVERSION.3
+++ b/docs/libcurl/opts/GNURLOPT_PROXY_SSLVERSION.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2016, 2018, Daniel Stenberg, <address@hidden>, et 
al.
+.\" * Copyright (C) 1998 - 2019, 2018, Daniel Stenberg, <address@hidden>, et 
al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_PROXY_TLS13_CIPHERS.3 
b/docs/libcurl/opts/GNURLOPT_PROXY_TLS13_CIPHERS.3
index 6e9918c8e..38b363ab7 100644
--- a/docs/libcurl/opts/GNURLOPT_PROXY_TLS13_CIPHERS.3
+++ b/docs/libcurl/opts/GNURLOPT_PROXY_TLS13_CIPHERS.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_RANGE.3 
b/docs/libcurl/opts/GNURLOPT_RANGE.3
index dcfbd58a8..4311ad1e9 100644
--- a/docs/libcurl/opts/GNURLOPT_RANGE.3
+++ b/docs/libcurl/opts/GNURLOPT_RANGE.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_REDIR_PROTOCOLS.3 
b/docs/libcurl/opts/GNURLOPT_REDIR_PROTOCOLS.3
index 3606b9379..b1b4fdda0 100644
--- a/docs/libcurl/opts/GNURLOPT_REDIR_PROTOCOLS.3
+++ b/docs/libcurl/opts/GNURLOPT_REDIR_PROTOCOLS.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2014, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_SEEKDATA.3 
b/docs/libcurl/opts/GNURLOPT_SEEKDATA.3
index b864b78ad..eb73a1a01 100644
--- a/docs/libcurl/opts/GNURLOPT_SEEKDATA.3
+++ b/docs/libcurl/opts/GNURLOPT_SEEKDATA.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_SSLVERSION.3 
b/docs/libcurl/opts/GNURLOPT_SSLVERSION.3
index 8e1cc7f54..60629ddf6 100644
--- a/docs/libcurl/opts/GNURLOPT_SSLVERSION.3
+++ b/docs/libcurl/opts/GNURLOPT_SSLVERSION.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2015, 2018, Daniel Stenberg, <address@hidden>, et 
al.
+.\" * Copyright (C) 1998 - 2019, 2018, Daniel Stenberg, <address@hidden>, et 
al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_TIMEOUT.3 
b/docs/libcurl/opts/GNURLOPT_TIMEOUT.3
index 307021fae..c6c3bc95e 100644
--- a/docs/libcurl/opts/GNURLOPT_TIMEOUT.3
+++ b/docs/libcurl/opts/GNURLOPT_TIMEOUT.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
@@ -30,9 +30,9 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_TIMEOUT, long 
timeout);
 .SH DESCRIPTION
 Pass a long as parameter containing \fItimeout\fP - the maximum time in
 seconds that you allow the libcurl transfer operation to take. Normally, name
-lookups can take a considerable time and limiting operations to less than a
-few minutes risk aborting perfectly normal operations. This option may cause
-libcurl to use the SIGALRM signal to timeout system calls.
+lookups can take a considerable time and limiting operations risks aborting
+perfectly normal operations. This option may cause libcurl to use the SIGALRM
+signal to time out system calls.
 
 In unix-like systems, this might cause signals to be used unless
 \fICURLOPT_NOSIGNAL(3)\fP is set.
@@ -40,11 +40,12 @@ In unix-like systems, this might cause signals to be used 
unless
 If both \fICURLOPT_TIMEOUT(3)\fP and \fICURLOPT_TIMEOUT_MS(3)\fP are set, the
 value set last will be used.
 
-Since this puts a hard limit for how long time a request is allowed to take,
-it has limited use in dynamic use cases with varying transfer times. You are
-then advised to explore \fICURLOPT_LOW_SPEED_LIMIT(3)\fP,
-\fICURLOPT_LOW_SPEED_TIME(3)\fP or using \fICURLOPT_PROGRESSFUNCTION(3)\fP to
-implement your own timeout logic.
+Since this option puts a hard limit on how long a request is allowed to
+take, it has limited use in dynamic use cases with varying transfer times. That
+is especially apparent when using the multi interface, which may queue the
+transfer, and that queuing time is included in the limit. You are advised to
+explore \fICURLOPT_LOW_SPEED_LIMIT(3)\fP, \fICURLOPT_LOW_SPEED_TIME(3)\fP or
+using \fICURLOPT_PROGRESSFUNCTION(3)\fP to implement your own timeout logic.
 .SH DEFAULT
 Default timeout is 0 (zero) which means it never times out during transfer.
 .SH PROTOCOLS
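A minimal sketch of that alternative, assuming a placeholder URL: give up when
the transfer stays below a minimum speed instead of enforcing one hard
overall limit.

  #include <gnurl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/big-file");
      /* abort if the transfer runs slower than 1000 bytes/second for
         30 consecutive seconds */
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1000L);
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 30L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }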
diff --git a/docs/libcurl/opts/GNURLOPT_TLS13_CIPHERS.3 
b/docs/libcurl/opts/GNURLOPT_TLS13_CIPHERS.3
index f2666f6a8..7913f2a06 100644
--- a/docs/libcurl/opts/GNURLOPT_TLS13_CIPHERS.3
+++ b/docs/libcurl/opts/GNURLOPT_TLS13_CIPHERS.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
diff --git a/docs/libcurl/opts/GNURLOPT_TRAILERDATA.3 
b/docs/libcurl/opts/GNURLOPT_TRAILERDATA.3
index e6e95cc61..909f051f4 100644
--- a/docs/libcurl/opts/GNURLOPT_TRAILERDATA.3
+++ b/docs/libcurl/opts/GNURLOPT_TRAILERDATA.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
@@ -23,17 +23,17 @@
 .TH GNURLOPT_TRAILERDATA 3 "14 Aug 2018" "libcurl 7.64.0" "curl_easy_setopt 
options"
 .SH NAME:
 CURLOPT_TRAILERDATA \- Custom pointer passed to the trailing headers callback
-.SH SYNOPSIS:
+.SH SYNOPSIS
 #include <curl.h>
 
 CURLcode curl_easy_setopt(CURL *handle, CURLOPT_TRAILERDATA, void *userdata);
 .SH DESCRIPTION:
 Data pointer to be passed to the HTTP trailer callback function.
-.SH DEFAULT:
+.SH DEFAULT
 NULL
-.SH PROTOCOLS:
+.SH PROTOCOLS
 HTTP
-.SH EXAMPLE:
+.SH EXAMPLE
 .nf
 /* Assuming we have a CURL handle in the hndl variable. */
 
@@ -43,7 +43,7 @@ curl_easy_setopt(hndl, CURLOPT_TRAILERDATA, &data);
 .fi
 
 A more complete example can be found in examples/http_trailers.html
-.SH AVAILABILITY:
+.SH AVAILABILITY
 This option was added in curl 7.64.0 and is present if HTTP support is enabled
 .SH "SEE ALSO"
 .BR CURLOPT_TRAILERFUNCTION "(3), "
diff --git a/docs/libcurl/opts/GNURLOPT_TRAILERFUNCTION.3 
b/docs/libcurl/opts/GNURLOPT_TRAILERFUNCTION.3
index 14caf177c..a6c3cdd49 100644
--- a/docs/libcurl/opts/GNURLOPT_TRAILERFUNCTION.3
+++ b/docs/libcurl/opts/GNURLOPT_TRAILERFUNCTION.3
@@ -5,7 +5,7 @@
 .\" *                            | (__| |_| |  _ <| |___
 .\" *                             \___|\___/|_| \_\_____|
 .\" *
-.\" * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+.\" * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 .\" *
 .\" * This software is licensed as described in the file COPYING, which
 .\" * you should have received as part of this distribution. The terms
@@ -23,13 +23,13 @@
 .TH GNURLOPT_TRAILERFUNCTION 3 "14 Aug 2018" "libcurl 7.64.0" 
"curl_easy_setopt options"
 .SH NAME:
 CURLOPT_TRAILERFUNCTION \- Set callback for sending trailing headers
-.SH SYNOPSIS:
+.SH SYNOPSIS
 #include <curl.h>
 
 int curl_trailer_callback(struct curl_slist ** list, void *userdata);
 
 CURLcode curl_easy_setopt(CURL *handle, CURLOPT_TRAILERFUNCTION, 
curl_trailer_callback *func);
-.SH DESCRIPTION:
+.SH DESCRIPTION
 Pass a pointer to a callback function.
 
 This callback function will be called once right before sending the final
@@ -56,11 +56,11 @@ trailers or to abort the request.
 
 If you set this option to NULL, then the transfer proceeds as usual
 without any interruptions.
-.SH DEFAULT:
+.SH DEFAULT
 NULL
-.SH PROTOCOLS:
+.SH PROTOCOLS
 HTTP
-.SH EXAMPLE:
+.SH EXAMPLE
 #include <gnurl/curl.h>
 
 static int trailer_cb(struct curl_slist **tr, void *data)
@@ -95,7 +95,7 @@ if(curl) {
 
   curl_slist_free_all(headers);
 }
-.SH AVAILABILITY:
+.SH AVAILABILITY
 This option was added in curl 7.64.0 and is present if HTTP support is enabled
 .SH "SEE ALSO"
 .BR CURLOPT_TRAILERDATA "(3), "
diff --git a/docs/libcurl/opts/Makefile.inc b/docs/libcurl/opts/Makefile.inc
index 58b58dc41..56550f05f 100644
--- a/docs/libcurl/opts/Makefile.inc
+++ b/docs/libcurl/opts/Makefile.inc
@@ -68,6 +68,7 @@ man_MANS =                                      \
   GNURLMOPT_CHUNK_LENGTH_PENALTY_SIZE.3          \
   GNURLMOPT_CONTENT_LENGTH_PENALTY_SIZE.3        \
   GNURLMOPT_MAXCONNECTS.3                        \
+  GNURLMOPT_MAX_CONCURRENT_STREAMS.3             \
   GNURLMOPT_MAX_HOST_CONNECTIONS.3               \
   GNURLMOPT_MAX_PIPELINE_LENGTH.3                \
   GNURLMOPT_MAX_TOTAL_CONNECTIONS.3              \
@@ -97,7 +98,6 @@ man_MANS =                                      \
   GNURLOPT_CHUNK_END_FUNCTION.3                  \
   GNURLOPT_CLOSESOCKETDATA.3                     \
   GNURLOPT_CLOSESOCKETFUNCTION.3                 \
-  GNURLOPT_UPKEEP_INTERVAL_MS.3             \
   GNURLOPT_CONNECTTIMEOUT.3                      \
   GNURLOPT_CONNECTTIMEOUT_MS.3                   \
   GNURLOPT_CONNECT_ONLY.3                        \
@@ -113,8 +113,8 @@ man_MANS =                                      \
   GNURLOPT_COPYPOSTFIELDS.3                      \
   GNURLOPT_CRLF.3                                \
   GNURLOPT_CRLFILE.3                             \
-  GNURLOPT_CUSTOMREQUEST.3                       \
   GNURLOPT_CURLU.3                               \
+  GNURLOPT_CUSTOMREQUEST.3                       \
   GNURLOPT_DEBUGDATA.3                           \
   GNURLOPT_DEBUGFUNCTION.3                       \
   GNURLOPT_DEFAULT_PROTOCOL.3                    \
@@ -168,8 +168,6 @@ man_MANS =                                      \
   GNURLOPT_HTTP_TRANSFER_DECODING.3              \
   GNURLOPT_HTTP_VERSION.3                        \
   GNURLOPT_IGNORE_CONTENT_LENGTH.3               \
-  GNURLOPT_TRAILERDATA.3                         \
-  GNURLOPT_TRAILERFUNCTION.3                     \
   GNURLOPT_INFILESIZE.3                          \
   GNURLOPT_INFILESIZE_LARGE.3                    \
   GNURLOPT_INTERFACE.3                           \
@@ -332,10 +330,13 @@ man_MANS =                                      \
   GNURLOPT_TLSAUTH_PASSWORD.3                    \
   GNURLOPT_TLSAUTH_TYPE.3                        \
   GNURLOPT_TLSAUTH_USERNAME.3                    \
+  GNURLOPT_TRAILERDATA.3                         \
+  GNURLOPT_TRAILERFUNCTION.3                     \
   GNURLOPT_TRANSFERTEXT.3                        \
   GNURLOPT_TRANSFER_ENCODING.3                   \
   GNURLOPT_UNIX_SOCKET_PATH.3                    \
   GNURLOPT_UNRESTRICTED_AUTH.3                   \
+  GNURLOPT_UPKEEP_INTERVAL_MS.3                  \
   GNURLOPT_UPLOAD.3                              \
   GNURLOPT_UPLOAD_BUFFERSIZE.3                   \
   GNURLOPT_URL.3                                 \
diff --git a/docs/libcurl/symbols-in-versions b/docs/libcurl/symbols-in-versions
index 9daad949f..bf23b4488 100644
--- a/docs/libcurl/symbols-in-versions
+++ b/docs/libcurl/symbols-in-versions
@@ -319,6 +319,7 @@ CURLMOPT_MAXCONNECTS            7.16.3
 CURLMOPT_MAX_HOST_CONNECTIONS   7.30.0
 CURLMOPT_MAX_PIPELINE_LENGTH    7.30.0
 CURLMOPT_MAX_TOTAL_CONNECTIONS  7.30.0
+CURLMOPT_MAX_CONCURRENT_STREAMS  7.67.0
 CURLMOPT_PIPELINING             7.16.0
 CURLMOPT_PIPELINING_SERVER_BL   7.30.0
 CURLMOPT_PIPELINING_SITE_BL     7.30.0
@@ -779,6 +780,7 @@ CURLU_DISALLOW_USER             7.62.0
 CURLU_GUESS_SCHEME              7.62.0
 CURLU_NON_SUPPORT_SCHEME        7.62.0
 CURLU_NO_DEFAULT_PORT           7.62.0
+CURLU_NO_AUTHORITY              7.67.0
 CURLU_PATH_AS_IS                7.62.0
 CURLU_URLDECODE                 7.62.0
 CURLU_URLENCODE                 7.62.0
@@ -925,6 +927,7 @@ CURL_VERSION_BROTLI             7.57.0
 CURL_VERSION_CONV               7.15.4
 CURL_VERSION_CURLDEBUG          7.19.6
 CURL_VERSION_DEBUG              7.10.6
+CURL_VERSION_ESNI               7.67.0
 CURL_VERSION_GSSAPI             7.38.0
 CURL_VERSION_GSSNEGOTIATE       7.10.6        7.38.0
 CURL_VERSION_HTTP2              7.33.0
diff --git a/include/gnurl/curl.h b/include/gnurl/curl.h
index ff0c77496..dcbe8995c 100644
--- a/include/gnurl/curl.h
+++ b/include/gnurl/curl.h
@@ -2800,6 +2800,8 @@ typedef struct {
 #define CURL_VERSION_ALTSVC       (1<<24) /* Alt-Svc handling built-in */
 #define CURL_VERSION_HTTP3        (1<<25) /* HTTP3 support built-in */
 
+#define CURL_VERSION_ESNI         (1<<26) /* ESNI support */
+
  /*
  * NAME curl_version_info()
  *
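A minimal sketch of checking that feature bit at run time, assuming headers
and library are both new enough to define and advertise it:

  #include <stdio.h>
  #include <gnurl/curl.h>

  int main(void)
  {
    /* query the running library for its built-in features */
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    printf("ESNI support: %s\n",
           (info->features & CURL_VERSION_ESNI) ? "yes" : "no");
    return 0;
  }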
diff --git a/include/gnurl/curlver.h b/include/gnurl/curlver.h
index 54fa2cb94..31fc72d12 100644
--- a/include/gnurl/curlver.h
+++ b/include/gnurl/curlver.h
@@ -30,12 +30,12 @@
 
 /* This is the version number of the libcurl package from which this header
    file origins: */
-#define LIBCURL_VERSION "7.66.0-DEV"
+#define LIBCURL_VERSION "7.67.0-DEV"
 
 /* The numeric version number is also available "in parts" by using these
    defines: */
 #define LIBCURL_VERSION_MAJOR 7
-#define LIBCURL_VERSION_MINOR 66
+#define LIBCURL_VERSION_MINOR 67
 #define LIBCURL_VERSION_PATCH 0
 
 /* This is the numeric version of the libcurl version number, meant for easier
@@ -57,7 +57,7 @@
    CURL_VERSION_BITS() macro since curl's own configure script greps for it
    and needs it to contain the full number.
 */
-#define LIBCURL_VERSION_NUM 0x074200
+#define LIBCURL_VERSION_NUM 0x074300
 
 /*
  * This is the date and time when the full source package was created. The
diff --git a/include/gnurl/multi.h b/include/gnurl/multi.h
index f10932244..fce68f4ff 100644
--- a/include/gnurl/multi.h
+++ b/include/gnurl/multi.h
@@ -396,6 +396,9 @@ typedef enum {
   /* This is the argument passed to the server push callback */
   CINIT(PUSHDATA, OBJECTPOINT, 15),
 
+  /* maximum number of concurrent streams to support on a connection */
+  CINIT(MAX_CONCURRENT_STREAMS, LONG, 16),
+
   CURLMOPT_LASTENTRY /* the last unused */
 } CURLMoption;
 
@@ -448,6 +451,9 @@ typedef int (*curl_push_callback)(CURL *parent,
                                   struct curl_pushheaders *headers,
                                   void *userp);
 
+/* value for MAXIMUM CONCURRENT STREAMS upper limit */
+#define INITIAL_MAX_CONCURRENT_STREAMS ((1U << 31) - 1)
+
 #ifdef __cplusplus
 } /* end of extern "C" */
 #endif
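
The new CURLMOPT_MAX_CONCURRENT_STREAMS option is set on the multi handle
like any other CURLMoption. A minimal usage sketch, illustrative only (the
gnurl include path and the value 50 are assumptions, error checking
omitted):

    #include <gnurl/curl.h>

    int main(void)
    {
      CURLM *multi = curl_multi_init();
      /* advertise at most 50 concurrent streams per HTTP/2 connection;
         the server may still impose a lower limit of its own */
      curl_multi_setopt(multi, CURLMOPT_MAX_CONCURRENT_STREAMS, 50L);
      curl_multi_cleanup(multi);
      return 0;
    }
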
diff --git a/include/gnurl/urlapi.h b/include/gnurl/urlapi.h
index 0f2f152f1..f2d06770d 100644
--- a/include/gnurl/urlapi.h
+++ b/include/gnurl/urlapi.h
@@ -77,6 +77,8 @@ typedef enum {
 #define CURLU_URLENCODE (1<<7)          /* URL encode on set */
 #define CURLU_APPENDQUERY (1<<8)        /* append a form style part */
 #define CURLU_GUESS_SCHEME (1<<9)       /* legacy curl-style guessing */
+#define CURLU_NO_AUTHORITY (1<<10)      /* Allow empty authority when the
+                                           scheme is unknown. */
 
 typedef struct Curl_URL CURLU;
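
A hedged sketch of using the new CURLU_NO_AUTHORITY flag with the URL API;
the "msg:" scheme and the helper name are assumptions for illustration:

    #include <gnurl/curl.h>

    /* parse a URL whose unknown scheme legitimately has no authority part */
    static CURLUcode parse_no_authority(CURLU **out)
    {
      CURLU *u = curl_url();
      CURLUcode rc = curl_url_set(u, CURLUPART_URL, "msg:incoming",
                                  CURLU_NON_SUPPORT_SCHEME | CURLU_NO_AUTHORITY);
      if(rc) {
        curl_url_cleanup(u);
        u = NULL;
      }
      *out = u;
      return rc;
    }
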
 
diff --git a/lib/Makefile.inc b/lib/Makefile.inc
index 3e3a385c5..72ef428ee 100644
--- a/lib/Makefile.inc
+++ b/lib/Makefile.inc
@@ -61,7 +61,7 @@ LIB_CFILES = file.c timeval.c base64.c hostip.c progress.c formdata.c   \
   curl_multibyte.c hostcheck.c conncache.c dotdot.c                     \
   x509asn1.c http2.c smb.c curl_endian.c curl_des.c system_win32.c      \
   mime.c sha256.c setopt.c curl_path.c curl_ctype.c curl_range.c psl.c  \
-  doh.c urlapi.c curl_get_line.c altsvc.c
+  doh.c urlapi.c curl_get_line.c altsvc.c socketpair.c
 
 LIB_HFILES = arpa_telnet.h netrc.h file.h timeval.h hostip.h progress.h \
   formdata.h cookie.h http.h sendf.h ftp.h url.h dict.h if2ip.h         \
@@ -82,7 +82,7 @@ LIB_HFILES = arpa_telnet.h netrc.h file.h timeval.h hostip.h progress.h \
   x509asn1.h http2.h sigpipe.h smb.h curl_endian.h curl_des.h           \
   curl_printf.h system_win32.h rand.h mime.h curl_sha256.h setopt.h     \
   curl_path.h curl_ctype.h curl_range.h psl.h doh.h urlapi-int.h        \
-  curl_get_line.h altsvc.h quic.h
+  curl_get_line.h altsvc.h quic.h socketpair.h
 
 LIB_RCFILES = libcurl.rc
 
diff --git a/lib/Makefile.netware b/lib/Makefile.netware
index db946968f..adc1ce6c3 100644
--- a/lib/Makefile.netware
+++ b/lib/Makefile.netware
@@ -640,8 +640,6 @@ ifdef WITH_SSL
        @echo $(DL)#define HAVE_OPENSSL_ERR_H 1$(DL) >> $@
        @echo $(DL)#define HAVE_OPENSSL_CRYPTO_H 1$(DL) >> $@
        @echo $(DL)#define HAVE_OPENSSL_ENGINE_H 1$(DL) >> $@
-       @echo $(DL)#define HAVE_LIBSSL 1$(DL) >> $@
-       @echo $(DL)#define HAVE_LIBCRYPTO 1$(DL) >> $@
        @echo $(DL)#define OPENSSL_NO_KRB5 1$(DL) >> $@
 ifdef WITH_SRP
        @echo $(DL)#define USE_TLS_SRP 1$(DL) >> $@
diff --git a/lib/altsvc.c b/lib/altsvc.c
index c803773cd..c773b7bdc 100644
--- a/lib/altsvc.c
+++ b/lib/altsvc.c
@@ -54,8 +54,8 @@ static enum alpnid alpn2alpnid(char *name)
     return ALPN_h1;
   if(strcasecompare(name, "h2"))
     return ALPN_h2;
-#if (defined(USE_QUICHE) || defined(USE_NGHTTP2)) && !defined(UNITTESTS)
-  if(strcasecompare(name, "h3-22"))
+#if (defined(USE_QUICHE) || defined(USE_NGTCP2)) && !defined(UNITTESTS)
+  if(strcasecompare(name, "h3-23"))
     return ALPN_h3;
 #else
   if(strcasecompare(name, "h3"))
@@ -73,8 +73,8 @@ const char *Curl_alpnid2str(enum alpnid id)
   case ALPN_h2:
     return "h2";
   case ALPN_h3:
-#if (defined(USE_QUICHE) || defined(USE_NGHTTP2)) && !defined(UNITTESTS)
-    return "h3-22";
+#if (defined(USE_QUICHE) || defined(USE_NGTCP2)) && !defined(UNITTESTS)
+    return "h3-23";
 #else
     return "h3";
 #endif
@@ -442,6 +442,7 @@ CURLcode Curl_altsvc_parse(struct Curl_easy *data,
       char option[32];
       unsigned long num;
       char *end_ptr;
+      bool quoted = FALSE;
       semip++; /* pass the semicolon */
       result = getalnum(&semip, option, sizeof(option));
       if(result)
@@ -451,12 +452,21 @@ CURLcode Curl_altsvc_parse(struct Curl_easy *data,
       if(*semip != '=')
         continue;
       semip++;
+      while(*semip && ISBLANK(*semip))
+        semip++;
+      if(*semip == '\"') {
+        /* quoted value */
+        semip++;
+        quoted = TRUE;
+      }
       num = strtoul(semip, &end_ptr, 10);
-      if(num < ULONG_MAX) {
+      if((end_ptr != semip) && num && (num < ULONG_MAX)) {
         if(strcasecompare("ma", option))
           maxage = num;
         else if(strcasecompare("persist", option) && (num == 1))
           persist = TRUE;
+        if(quoted && (*end_ptr == '\"'))
+          end_ptr++;
       }
       semip = end_ptr;
     }
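
For illustration only, an Alt-Svc value the updated parser is meant to cope
with: quoted parameter values plus the h3-23 draft ALPN token. The concrete
header text below is an assumption, not taken from the patch or its tests:

    /* e.g. received as: Alt-Svc: h3-23=":443"; ma="2592000"; persist="1" */
    static const char example_alt_svc_value[] =
      "h3-23=\":443\"; ma=\"2592000\"; persist=\"1\"";
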
diff --git a/lib/asyn-thread.c b/lib/asyn-thread.c
index 24da74885..8c552baa9 100755
--- a/lib/asyn-thread.c
+++ b/lib/asyn-thread.c
@@ -21,6 +21,7 @@
  ***************************************************************************/
 
 #include "curl_setup.h"
+#include "socketpair.h"
 
 /***********************************************************************
  * Only for threaded name resolves builds
@@ -74,6 +75,7 @@
 #include "inet_ntop.h"
 #include "curl_threads.h"
 #include "connect.h"
+#include "socketpair.h"
 /* The last 3 #include files should be in this order */
 #include "curl_printf.h"
 #include "curl_memory.h"
@@ -163,7 +165,7 @@ struct thread_sync_data {
   char *hostname;        /* hostname to resolve, Curl_async.hostname
                             duplicate */
   int port;
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
   struct connectdata *conn;
   curl_socket_t sock_pair[2]; /* socket pair */
 #endif
@@ -201,7 +203,7 @@ void destroy_thread_sync_data(struct thread_sync_data * tsd)
   if(tsd->res)
     Curl_freeaddrinfo(tsd->res);
 
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
   /*
    * close one end of the socket pair (may be done in resolver thread);
    * the other end (for reading) is always closed in the parent thread.
@@ -243,9 +245,9 @@ int init_thread_sync_data(struct thread_data * td,
 
   Curl_mutex_init(tsd->mtx);
 
-#ifdef HAVE_SOCKETPAIR
-  /* create socket pair */
-  if(socketpair(AF_LOCAL, SOCK_STREAM, 0, &tsd->sock_pair[0]) < 0) {
+#ifdef USE_SOCKETPAIR
+  /* create socket pair, avoid AF_LOCAL since it doesn't build on Solaris */
+  if(Curl_socketpair(AF_UNIX, SOCK_STREAM, 0, &tsd->sock_pair[0]) < 0) {
     tsd->sock_pair[0] = CURL_SOCKET_BAD;
     tsd->sock_pair[1] = CURL_SOCKET_BAD;
     goto err_exit;
@@ -297,7 +299,7 @@ static unsigned int CURL_STDCALL getaddrinfo_thread(void *arg)
   struct thread_data *td = tsd->td;
   char service[12];
   int rc;
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
   char buf[1];
 #endif
 
@@ -322,11 +324,11 @@ static unsigned int CURL_STDCALL getaddrinfo_thread(void *arg)
     free(td);
   }
   else {
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
     if(tsd->sock_pair[1] != CURL_SOCKET_BAD) {
       /* DNS has been resolved, signal client task */
       buf[0] = 1;
-      if(write(tsd->sock_pair[1],  buf, sizeof(buf)) < 0) {
+      if(swrite(tsd->sock_pair[1],  buf, sizeof(buf)) < 0) {
         /* update sock_error to errno */
         tsd->sock_error = SOCKERRNO;
       }
@@ -382,7 +384,7 @@ static void destroy_async_data(struct Curl_async *async)
   if(async->os_specific) {
     struct thread_data *td = (struct thread_data*) async->os_specific;
     int done;
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
     curl_socket_t sock_rd = td->tsd.sock_pair[0];
     struct connectdata *conn = td->tsd.conn;
 #endif
@@ -407,7 +409,7 @@ static void destroy_async_data(struct Curl_async *async)
 
       free(async->os_specific);
     }
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
     /*
      * ensure CURLMOPT_SOCKETFUNCTION fires CURL_POLL_REMOVE
      * before the FD is invalidated to avoid EBADF on EPOLL_CTL_DEL
@@ -647,13 +649,13 @@ int Curl_resolver_getsock(struct connectdata *conn,
   timediff_t ms;
   struct Curl_easy *data = conn->data;
   struct resdata *reslv = (struct resdata *)data->state.resolver;
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
   struct thread_data *td = (struct thread_data*)conn->async.os_specific;
 #else
   (void)socks;
 #endif
 
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
   if(td) {
     /* return read fd to client for polling the DNS resolution status */
     socks[0] = td->tsd.sock_pair[0];
@@ -673,7 +675,7 @@ int Curl_resolver_getsock(struct connectdata *conn,
     else
       milli = 200;
     Curl_expire(data, milli, EXPIRE_ASYNC_NAME);
-#ifdef HAVE_SOCKETPAIR
+#ifdef USE_SOCKETPAIR
   }
 #endif
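
A rough sketch of the wakeup mechanism the socket pair provides (assumed
names, not the real lib/socketpair.c or resolver code): the resolver thread
writes one byte when done and the transfer loop polls the read end, so DNS
completion is noticed without busy-waiting. AF_UNIX is used because Solaris
lacks AF_LOCAL:

    #include <sys/socket.h>
    #include <unistd.h>

    /* main side: create the pair with AF_UNIX (portable, unlike AF_LOCAL) */
    static int make_wakeup_pair(int pair[2])
    {
      return socketpair(AF_UNIX, SOCK_STREAM, 0, pair);
    }

    /* resolver-thread side: signal "resolution finished" with one byte */
    static int notify_resolved(int pair[2])
    {
      char buf[1] = {1};
      return (int)write(pair[1], buf, sizeof(buf));
    }
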
 
diff --git a/lib/checksrc.pl b/lib/checksrc.pl
index 965f0bab1..b2cfa8355 100755
--- a/lib/checksrc.pl
+++ b/lib/checksrc.pl
@@ -176,7 +176,7 @@ sub checkwarn {
 
 $file = shift @ARGV;
 
-while(1) {
+while(defined $file) {
 
     if($file =~ /-D(.*)/) {
         $dir = $1;
diff --git a/lib/config-amigaos.h b/lib/config-amigaos.h
index 31cfc3afc..12a87cf29 100644
--- a/lib/config-amigaos.h
+++ b/lib/config-amigaos.h
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -36,8 +36,6 @@
 #define HAVE_INTTYPES_H 1
 #define HAVE_IOCTLSOCKET_CAMEL 1
 #define HAVE_IOCTLSOCKET_CAMEL_FIONBIO 1
-#define HAVE_LIBCRYPTO 1
-#define HAVE_LIBSSL 1
 #define HAVE_LIBZ 1
 #define HAVE_LONGLONG 1
 #define HAVE_MALLOC_H 1
diff --git a/lib/config-os400.h b/lib/config-os400.h
index d14bd3391..a302828e2 100644
--- a/lib/config-os400.h
+++ b/lib/config-os400.h
@@ -160,9 +160,6 @@
 /* Define if you have the <krb.h> header file. */
 #undef HAVE_KRB_H
 
-/* Define if you have the `crypto' library (-lcrypto). */
-#undef HAVE_LIBCRYPTO
-
 /* Define if you have the `nsl' library (-lnsl). */
 #undef HAVE_LIBNSL
 
@@ -175,9 +172,6 @@
 /* Define if you have the `socket' library (-lsocket). */
 #undef HAVE_LIBSOCKET
 
-/* Define if you have the `ssl' library (-lssl). */
-#undef HAVE_LIBSSL
-
 /* Define if you have GSS API. */
 #define HAVE_GSSAPI
 
diff --git a/lib/config-plan9.h b/lib/config-plan9.h
index 70833a51d..64bfbdea0 100644
--- a/lib/config-plan9.h
+++ b/lib/config-plan9.h
@@ -126,7 +126,6 @@
 #define HAVE_INTTYPES_H 1
 #define HAVE_IOCTL 1
 #define HAVE_LIBGEN_H 1
-#define HAVE_LIBSSL 1
 #define HAVE_LIBZ 1
 #define HAVE_LL 1
 #define HAVE_LOCALE_H 1
diff --git a/lib/config-riscos.h b/lib/config-riscos.h
index 0379524fb..4af94981c 100644
--- a/lib/config-riscos.h
+++ b/lib/config-riscos.h
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2013, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -164,9 +164,6 @@
 /* Define if you have the <krb.h> header file. */
 #undef HAVE_KRB_H
 
-/* Define if you have the `crypto' library (-lcrypto). */
-#undef HAVE_LIBCRYPTO
-
 /* Define if you have the `nsl' library (-lnsl). */
 #undef HAVE_LIBNSL
 
@@ -179,9 +176,6 @@
 /* Define if you have the `socket' library (-lsocket). */
 #undef HAVE_LIBSOCKET
 
-/* Define if you have the `ssl' library (-lssl). */
-#undef HAVE_LIBSSL
-
 /* Define if you have the `ucb' library (-lucb). */
 #undef HAVE_LIBUCB
 
diff --git a/lib/config-symbian.h b/lib/config-symbian.h
index b7b93c6f4..cb2e96d5d 100644
--- a/lib/config-symbian.h
+++ b/lib/config-symbian.h
@@ -315,9 +315,6 @@
 /* Define to 1 if you have the <libssh2.h> header file. */
 /*#define HAVE_LIBSSH2_H 1*/
 
-/* Define to 1 if you have the `ssl' library (-lssl). */
-/*#define HAVE_LIBSSL 1*/
-
 /* if your compiler supports LL */
 #define HAVE_LL 1
 
diff --git a/lib/config-tpf.h b/lib/config-tpf.h
index 778d9833f..f0c095bb0 100644
--- a/lib/config-tpf.h
+++ b/lib/config-tpf.h
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2015, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -277,10 +277,6 @@
 /* Define to 1 if you have the `socket' library (-lsocket). */
 /* #undef HAVE_LIBSOCKET */
 
-/* Define to 1 if you have the `ssl' library (-lssl). */
-/* #undef HAVE_LIBSSL */
-#define HAVE_LIBSSL 1
-
 /* if zlib is available */
 /* #undef HAVE_LIBZ */
 
diff --git a/lib/config-vxworks.h b/lib/config-vxworks.h
index 89af3525b..d352578e3 100644
--- a/lib/config-vxworks.h
+++ b/lib/config-vxworks.h
@@ -375,9 +375,6 @@
 /* Define to 1 if you have the `libssh2_version' function. */
 /* #undef HAVE_LIBSSH2_VERSION */
 
-/* Define to 1 if you have the `ssl' library (-lssl). */
-#define HAVE_LIBSSL 1
-
 /* if zlib is available */
 #define HAVE_LIBZ 1
 
diff --git a/lib/conncache.c b/lib/conncache.c
index 028f4aed3..270352255 100644
--- a/lib/conncache.c
+++ b/lib/conncache.c
@@ -143,10 +143,8 @@ int Curl_conncache_init(struct conncache *connc, int size)
 
   rc = Curl_hash_init(&connc->hash, size, Curl_hash_str,
                       Curl_str_key_compare, free_bundle_hash_entry);
-  if(rc) {
-    Curl_close(connc->closure_handle);
-    connc->closure_handle = NULL;
-  }
+  if(rc)
+    Curl_close(&connc->closure_handle);
   else
     connc->closure_handle->state.conn_cache = connc;
 
@@ -595,7 +593,7 @@ void Curl_conncache_close_all_connections(struct conncache *connc)
 
     Curl_hostcache_clean(connc->closure_handle,
                          connc->closure_handle->dns.hostcache);
-    Curl_close(connc->closure_handle);
+    Curl_close(&connc->closure_handle);
     sigpipe_restore(&pipe_st);
   }
 }
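
This hunk (and several later ones) switches to passing the handle's address
so Curl_close() can clear the caller's pointer itself. A generic sketch of
that pattern with hypothetical names:

    #include <stdlib.h>

    struct handle { int refs; };

    /* free the object and zero the caller's pointer in one step, so a
       second close of the same variable becomes a harmless no-op */
    static void close_and_clear(struct handle **h)
    {
      if(!h || !*h)
        return;
      free(*h);
      *h = NULL;
    }
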
diff --git a/lib/connect.c b/lib/connect.c
index 77196250d..3b88a5962 100644
--- a/lib/connect.c
+++ b/lib/connect.c
@@ -665,7 +665,7 @@ bool Curl_addr2string(struct sockaddr *sa, curl_socklen_t salen,
 #endif
 #if defined(HAVE_SYS_UN_H) && defined(AF_UNIX)
     case AF_UNIX:
-      if(salen > sizeof(sa_family_t)) {
+      if(salen > (curl_socklen_t)sizeof(sa_family_t)) {
         su = (struct sockaddr_un*)sa;
         msnprintf(addr, MAX_IPADR_LEN, "%s", su->sun_path);
       }
@@ -976,6 +976,14 @@ CURLcode Curl_is_connected(struct connectdata *conn,
     failf(data, "Failed to connect to %s port %ld: %s",
           hostname, conn->port,
           Curl_strerror(error, buffer, sizeof(buffer)));
+
+#ifdef WSAETIMEDOUT
+    if(WSAETIMEDOUT == data->state.os_errno)
+      result = CURLE_OPERATION_TIMEDOUT;
+#elif defined(ETIMEDOUT)
+    if(ETIMEDOUT == data->state.os_errno)
+      result = CURLE_OPERATION_TIMEDOUT;
+#endif
   }
 
   return result;
@@ -1508,6 +1516,11 @@ CURLcode Curl_socket(struct connectdata *conn,
     /* no socket, no connection */
     return CURLE_COULDNT_CONNECT;
 
+  if(conn->transport == TRNSPRT_QUIC) {
+    /* QUIC sockets need to be nonblocking */
+    (void)curlx_nonblock(*sockfd, TRUE);
+  }
+
 #if defined(ENABLE_IPV6) && defined(HAVE_SOCKADDR_IN6_SIN6_SCOPE_ID)
   if(conn->scope_id && (addr->family == AF_INET6)) {
     struct sockaddr_in6 * const sa6 = (void *)&addr->sa_addr;
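
A minimal POSIX-only sketch of what switching a socket to non-blocking mode
involves, as the QUIC branch above does through curlx_nonblock(); the helper
name below is an assumption:

    #include <fcntl.h>

    static int set_nonblocking(int fd)
    {
      int flags = fcntl(fd, F_GETFL, 0);
      if(flags < 0)
        return -1;
      return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }
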
diff --git a/lib/cookie.c b/lib/cookie.c
index 53ca40237..f56bd85a9 100644
--- a/lib/cookie.c
+++ b/lib/cookie.c
@@ -1090,6 +1090,8 @@ Curl_cookie_add(struct Curl_easy *data,
  *
  * If 'newsession' is TRUE, discard all "session cookies" on read from file.
  *
+ * Note that 'data' might be passed as a NULL pointer.
+ *
  * Returns NULL on out of memory. Invalid cookies are ignored.
  ****************************************************************************/
 struct CookieInfo *Curl_cookie_init(struct Curl_easy *data,
@@ -1160,6 +1162,8 @@ struct CookieInfo *Curl_cookie_init(struct Curl_easy *data,
   }
 
   c->running = TRUE;          /* now, we're running */
+  if(data)
+    data->state.cookie_engine = TRUE;
 
   return c;
 
@@ -1528,28 +1532,28 @@ static int cookie_output(struct CookieInfo *c, const char *dumphere)
 
   if(c->numcookies) {
     unsigned int i;
-    unsigned int j;
+    size_t nvalid = 0;
     struct Cookie **array;
 
-    array = malloc(sizeof(struct Cookie *) * c->numcookies);
+    array = calloc(1, sizeof(struct Cookie *) * c->numcookies);
     if(!array) {
       if(!use_stdout)
         fclose(out);
       return 1;
     }
 
-    j = 0;
+    /* only sort the cookies with a domain property */
     for(i = 0; i < COOKIE_HASH_SIZE; i++) {
       for(co = c->cookies[i]; co; co = co->next) {
         if(!co->domain)
           continue;
-        array[j++] = co;
+        array[nvalid++] = co;
       }
     }
 
-    qsort(array, c->numcookies, sizeof(struct Cookie *), cookie_sort_ct);
+    qsort(array, nvalid, sizeof(struct Cookie *), cookie_sort_ct);
 
-    for(i = 0; i < j; i++) {
+    for(i = 0; i < nvalid; i++) {
       char *format_ptr = get_netscape_format(array[i]);
       if(format_ptr == NULL) {
         fprintf(out, "#\n# Fatal libcurl error\n");
@@ -1613,7 +1617,7 @@ struct curl_slist *Curl_cookie_list(struct Curl_easy *data)
   return list;
 }
 
-void Curl_flush_cookies(struct Curl_easy *data, int cleanup)
+void Curl_flush_cookies(struct Curl_easy *data, bool cleanup)
 {
   if(data->set.str[STRING_COOKIEJAR]) {
     if(data->change.cookielist) {
@@ -1642,6 +1646,7 @@ void Curl_flush_cookies(struct Curl_easy *data, int cleanup)
 
   if(cleanup && (!data->share || (data->cookies != data->share->cookies))) {
     Curl_cookie_cleanup(data->cookies);
+    data->cookies = NULL;
   }
   Curl_share_unlock(data, CURL_LOCK_DATA_COOKIE);
 }
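
With the new data->state.cookie_engine flag, received Set-Cookie headers are
only stored once the engine has been switched on. A hedged application-side
sketch (URL and options are illustrative, error checks omitted); an empty
CURLOPT_COOKIEFILE is the usual way to enable the engine:

    #include <gnurl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
      /* an empty cookie file name enables the cookie engine */
      curl_easy_setopt(easy, CURLOPT_COOKIEFILE, "");
      curl_easy_perform(easy);
      curl_easy_cleanup(easy);
      return 0;
    }
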
diff --git a/lib/cookie.h b/lib/cookie.h
index 6e314f9fa..2ac7a518e 100644
--- a/lib/cookie.h
+++ b/lib/cookie.h
@@ -109,7 +109,7 @@ void Curl_cookie_clearsess(struct CookieInfo *cookies);
 #define Curl_cookie_cleanup(x) Curl_nop_stmt
 #define Curl_flush_cookies(x,y) Curl_nop_stmt
 #else
-void Curl_flush_cookies(struct Curl_easy *data, int cleanup);
+void Curl_flush_cookies(struct Curl_easy *data, bool cleanup);
 void Curl_cookie_cleanup(struct CookieInfo *);
 struct CookieInfo *Curl_cookie_init(struct Curl_easy *data,
                                     const char *, struct CookieInfo *, bool);
diff --git a/lib/curl_config.h.cmake b/lib/curl_config.h.cmake
index 5458cbaca..e0793a7ee 100644
--- a/lib/curl_config.h.cmake
+++ b/lib/curl_config.h.cmake
@@ -407,9 +407,6 @@
 /* Define to 1 if you have the <libssh2.h> header file. */
 #cmakedefine HAVE_LIBSSH2_H 1
 
-/* Define to 1 if you have the `ssl' library (-lssl). */
-#cmakedefine HAVE_LIBSSL 1
-
 /* if zlib is available */
 #cmakedefine HAVE_LIBZ 1
 
diff --git a/lib/doh.c b/lib/doh.c
index 6d1f3303b..d1795789e 100644
--- a/lib/doh.c
+++ b/lib/doh.c
@@ -74,17 +74,26 @@ static const char *doh_strerror(DOHcode code)
 #define UNITTEST static
 #endif
 
+/* @unittest 1655
+ */
 UNITTEST DOHcode doh_encode(const char *host,
                             DNStype dnstype,
                             unsigned char *dnsp, /* buffer */
                             size_t len,  /* buffer size */
                             size_t *olen) /* output length */
 {
-  size_t hostlen = strlen(host);
+  const size_t hostlen = strlen(host);
   unsigned char *orig = dnsp;
   const char *hostp = host;
 
-  if(len < (12 + hostlen + 4))
+  /* The expected output length does not depend on the number of dots within
+   * the host name. It will always be two more than the length of the host
+   * name, one for the size and one trailing null. In case there are dots,
+   * each dot adds one size but removes the need to store the dot, net zero.
+   */
+  const size_t expected_len = 12 + ( 1 + hostlen + 1) + 4;
+
+  if(len < expected_len)
     return DOH_TOO_SMALL_BUFFER;
 
   *dnsp++ = 0; /* 16 bit id */
@@ -126,12 +135,18 @@ UNITTEST DOHcode doh_encode(const char *host,
     }
   } while(1);
 
-  *dnsp++ = '\0'; /* upper 8 bit TYPE */
-  *dnsp++ = (unsigned char)dnstype;
+  /* There are assigned TYPE codes beyond 255: use range [1..65535]  */
+  *dnsp++ = (unsigned char)(255 & (dnstype>>8)); /* upper 8 bit TYPE */
+  *dnsp++ = (unsigned char)(255 & dnstype);      /* lower 8 bit TYPE */
+
   *dnsp++ = '\0'; /* upper 8 bit CLASS */
   *dnsp++ = DNS_CLASS_IN; /* IN - "the Internet" */
 
   *olen = dnsp - orig;
+
+  /* verify that our assumption of length is valid, since
+   * this has led to buffer overflows in this function */
+  DEBUGASSERT(*olen == expected_len);
   return DOH_OK;
 }
 
@@ -225,7 +240,10 @@ static CURLcode dohprobe(struct Curl_easy *data,
   }
 
   timeout_ms = Curl_timeleft(data, NULL, TRUE);
-
+  if(timeout_ms <= 0) {
+    result = CURLE_OPERATION_TIMEDOUT;
+    goto error;
+  }
   /* Curl_open() is the internal version of curl_easy_init() */
   result = Curl_open(&doh);
   if(!result) {
@@ -246,6 +264,9 @@ static CURLcode dohprobe(struct Curl_easy *data,
 #ifndef CURLDEBUG
     /* enforce HTTPS if not debug */
     ERROR_CHECK_SETOPT(CURLOPT_PROTOCOLS, CURLPROTO_HTTPS);
+#else
+    /* in debug mode, also allow http */
+    ERROR_CHECK_SETOPT(CURLOPT_PROTOCOLS, CURLPROTO_HTTP|CURLPROTO_HTTPS);
 #endif
     ERROR_CHECK_SETOPT(CURLOPT_TIMEOUT_MS, (long)timeout_ms);
     if(data->set.verbose)
@@ -325,7 +346,7 @@ static CURLcode dohprobe(struct Curl_easy *data,
 
   error:
   free(nurl);
-  Curl_close(doh);
+  Curl_close(&doh);
   return result;
 }
 
@@ -381,10 +402,8 @@ Curl_addrinfo *Curl_doh(struct connectdata *conn,
   error:
   curl_slist_free_all(data->req.doh.headers);
   data->req.doh.headers = NULL;
-  curl_easy_cleanup(data->req.doh.probe[0].easy);
-  data->req.doh.probe[0].easy = NULL;
-  curl_easy_cleanup(data->req.doh.probe[1].easy);
-  data->req.doh.probe[1].easy = NULL;
+  Curl_close(&data->req.doh.probe[0].easy);
+  Curl_close(&data->req.doh.probe[1].easy);
   return NULL;
 }
 
@@ -419,8 +438,14 @@ static unsigned short get16bit(unsigned char *doh, int index)
 
 static unsigned int get32bit(unsigned char *doh, int index)
 {
-  return (doh[index] << 24) | (doh[index + 1] << 16) |
-    (doh[index + 2] << 8) | doh[index + 3];
+   /* make clang and gcc optimize this to bswap by incrementing
+      the pointer first. */
+   doh += index;
+
+   /* avoid undefined behaviour by casting to unsigned before shifting
+      24 bits, possibly into the sign bit. codegen is same, but
+      ub sanitizer won't be upset */
+  return ( (unsigned)doh[0] << 24) | (doh[1] << 16) |(doh[2] << 8) | doh[3];
 }
 
 static DOHcode store_a(unsigned char *doh, int index, struct dohentry *d)
@@ -898,17 +923,16 @@ CURLcode Curl_doh_is_resolved(struct connectdata *conn,
     struct dohentry de;
     /* remove DOH handles from multi handle and close them */
     curl_multi_remove_handle(data->multi, data->req.doh.probe[0].easy);
-    Curl_close(data->req.doh.probe[0].easy);
+    Curl_close(&data->req.doh.probe[0].easy);
     curl_multi_remove_handle(data->multi, data->req.doh.probe[1].easy);
-    Curl_close(data->req.doh.probe[1].easy);
-
+    Curl_close(&data->req.doh.probe[1].easy);
     /* parse the responses, create the struct and return it! */
     init_dohentry(&de);
     rc = doh_decode(data->req.doh.probe[0].serverdoh.memory,
                     data->req.doh.probe[0].serverdoh.size,
                     data->req.doh.probe[0].dnstype,
                     &de);
-    free(data->req.doh.probe[0].serverdoh.memory);
+    Curl_safefree(data->req.doh.probe[0].serverdoh.memory);
     if(rc) {
       infof(data, "DOH: %s type %s for %s\n", doh_strerror(rc),
             type2name(data->req.doh.probe[0].dnstype),
@@ -918,7 +942,7 @@ CURLcode Curl_doh_is_resolved(struct connectdata *conn,
                      data->req.doh.probe[1].serverdoh.size,
                      data->req.doh.probe[1].dnstype,
                      &de);
-    free(data->req.doh.probe[1].serverdoh.memory);
+    Curl_safefree(data->req.doh.probe[1].serverdoh.memory);
     if(rc2) {
       infof(data, "DOH: %s type %s for %s\n", doh_strerror(rc2),
             type2name(data->req.doh.probe[1].dnstype),
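
To spell out the undefined-behaviour point in the get32bit() change: the
byte is promoted to a signed int, and shifting a value of 0x80 or more left
by 24 bits lands in the sign bit. A standalone sketch of the well-defined
form (hypothetical helper name):

    static unsigned int read_be32(const unsigned char *p)
    {
      /* cast before shifting so the 24-bit shift is done on an unsigned value */
      return ((unsigned int)p[0] << 24) |
             ((unsigned int)p[1] << 16) |
             ((unsigned int)p[2] << 8)  |
              (unsigned int)p[3];
    }
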
diff --git a/lib/easy.c b/lib/easy.c
index cc670dcb3..9ff93fa89 100644
--- a/lib/easy.c
+++ b/lib/easy.c
@@ -731,7 +731,7 @@ void curl_easy_cleanup(struct Curl_easy *data)
     return;
 
   sigpipe_ignore(data, &pipe_st);
-  Curl_close(data);
+  Curl_close(&data);
   sigpipe_restore(&pipe_st);
 }
 
@@ -1020,9 +1020,8 @@ CURLcode curl_easy_pause(struct Curl_easy *data, int action)
 
   /* if there's no error and we're not pausing both directions, we want
      to have this handle checked soon */
-  if(!result &&
-     ((newstate&(KEEP_RECV_PAUSE|KEEP_SEND_PAUSE)) !=
-      (KEEP_RECV_PAUSE|KEEP_SEND_PAUSE)) ) {
+  if((newstate & (KEEP_RECV_PAUSE|KEEP_SEND_PAUSE)) !=
+     (KEEP_RECV_PAUSE|KEEP_SEND_PAUSE)) {
     Curl_expire(data, 0, EXPIRE_RUN_NOW); /* get this handle going again */
     if(data->multi)
       Curl_update_timer(data->multi);
diff --git a/lib/ftp.c b/lib/ftp.c
index 4a005027b..250ac566c 100644
--- a/lib/ftp.c
+++ b/lib/ftp.c
@@ -523,7 +523,7 @@ static CURLcode AllowServerConnect(struct connectdata *conn, bool *connected)
   }
   else {
     /* Add timeout to multi handle and break out of the loop */
-    if(!result && *connected == FALSE) {
+    if(*connected == FALSE) {
       Curl_expire(data, data->set.accepttimeout > 0 ?
                   data->set.accepttimeout: DEFAULT_ACCEPT_TIMEOUT, 0);
     }
@@ -867,6 +867,10 @@ static CURLcode ftp_state_cwd(struct connectdata *conn)
     /* already done and fine */
     result = ftp_state_mdtm(conn);
   else {
+    /* FTPFILE_NOCWD with full path: expect ftpc->cwddone! */
+    DEBUGASSERT((conn->data->set.ftp_filemethod != FTPFILE_NOCWD) ||
+                !(ftpc->dirdepth && ftpc->dirs[0][0] == '/'));
+
     ftpc->count2 = 0; /* count2 counts failed CWDs */
 
     /* count3 is set to allow a MKD to fail once. In the case when first CWD
@@ -874,10 +878,9 @@ static CURLcode ftp_state_cwd(struct connectdata *conn)
        dir) this then allows for a second try to CWD to it */
     ftpc->count3 = (conn->data->set.ftp_create_missing_dirs == 2)?1:0;
 
-    if((conn->data->set.ftp_filemethod == FTPFILE_NOCWD) && !ftpc->cwdcount)
-      /* No CWD necessary */
-      result = ftp_state_mdtm(conn);
-    else if(conn->bits.reuse && ftpc->entrypath) {
+    if(conn->bits.reuse && ftpc->entrypath &&
+       /* no need to go to entrypath when we have an absolute path */
+       !(ftpc->dirdepth && ftpc->dirs[0][0] == '/')) {
       /* This is a re-used connection. Since we change directory to where the
          transfer is taking place, we must first get back to the original dir
          where we ended up after login: */
@@ -1436,31 +1439,37 @@ static CURLcode ftp_state_list(struct connectdata *conn)
      servers either... */
 
   /*
-     if FTPFILE_NOCWD was specified, we are currently in
-     the user's home directory, so we should add the path
+     if FTPFILE_NOCWD was specified, we should add the path
      as argument for the LIST / NLST / or custom command.
      Whether the server will support this, is uncertain.
 
      The other ftp_filemethods will CWD into dir/dir/ first and
      then just do LIST (in that case: nothing to do here)
   */
-  char *cmd, *lstArg, *slashPos;
-  const char *inpath = ftp->path;
-
-  lstArg = NULL;
-  if((data->set.ftp_filemethod == FTPFILE_NOCWD) &&
-     inpath && inpath[0] && strchr(inpath, '/')) {
-    size_t n = strlen(inpath);
-
-    /* Check if path does not end with /, as then we cut off the file part */
-    if(inpath[n - 1] != '/') {
-      /* chop off the file part if format is dir/dir/file */
-      slashPos = strrchr(inpath, '/');
-      n = slashPos - inpath;
-    }
-    result = Curl_urldecode(data, inpath, n, &lstArg, NULL, TRUE);
+  char *lstArg = NULL;
+  char *cmd;
+
+  if((data->set.ftp_filemethod == FTPFILE_NOCWD) && ftp->path) {
+    /* url-decode before evaluation: e.g. paths starting/ending with %2f */
+    const char *slashPos = NULL;
+    char *rawPath = NULL;
+    result = Curl_urldecode(data, ftp->path, 0, &rawPath, NULL, TRUE);
     if(result)
       return result;
+
+    slashPos = strrchr(rawPath, '/');
+    if(slashPos) {
+      /* chop off the file part if format is dir/file otherwise remove
+         the trailing slash for dir/dir/ except for absolute path / */
+      size_t n = slashPos - rawPath;
+      if(n == 0)
+        ++n;
+
+      lstArg = rawPath;
+      lstArg[n] = '\0';
+    }
+    else
+      free(rawPath);
   }
 
   cmd = aprintf("%s%s%s",
@@ -1469,15 +1478,12 @@ static CURLcode ftp_state_list(struct connectdata *conn)
                 (data->set.ftp_list_only?"NLST":"LIST"),
                 lstArg? " ": "",
                 lstArg? lstArg: "");
+  free(lstArg);
 
-  if(!cmd) {
-    free(lstArg);
+  if(!cmd)
     return CURLE_OUT_OF_MEMORY;
-  }
 
   result = Curl_pp_sendf(&conn->proto.ftpc.pp, "%s", cmd);
-
-  free(lstArg);
   free(cmd);
 
   if(result)
@@ -2242,9 +2248,25 @@ static CURLcode ftp_state_size_resp(struct connectdata *conn,
   char *buf = data->state.buffer;
 
   /* get the size from the ascii string: */
-  if(ftpcode == 213)
+  if(ftpcode == 213) {
+    /* To allow servers to prepend "rubbish" in the response string, we scan
+       for all the digits at the end of the response and parse only those as a
+       number. */
+    char *start = &buf[4];
+    char *fdigit = strchr(start, '\r');
+    if(fdigit) {
+      do
+        fdigit--;
+      while(ISDIGIT(*fdigit) && (fdigit > start));
+      if(!ISDIGIT(*fdigit))
+        fdigit++;
+    }
+    else
+      fdigit = start;
     /* ignores parsing errors, which will make the size remain unknown */
-    (void)curlx_strtoofft(buf + 4, NULL, 0, &filesize);
+    (void)curlx_strtoofft(fdigit, NULL, 0, &filesize);
+
+  }
 
   if(instate == FTP_SIZE) {
 #ifdef CURL_FTP_HTTPSTYLE_HEAD
@@ -3115,7 +3137,8 @@ static CURLcode ftp_done(struct connectdata *conn, CURLcode status,
   ssize_t nread;
   int ftpcode;
   CURLcode result = CURLE_OK;
-  char *path = NULL;
+  char *rawPath = NULL;
+  size_t pathLen = 0;
 
   if(!ftp)
     return CURLE_OK;
@@ -3153,9 +3176,6 @@ static CURLcode ftp_done(struct connectdata *conn, CURLcode status,
     break;
   }
 
-  /* now store a copy of the directory we are in */
-  free(ftpc->prevpath);
-
   if(data->state.wildcardmatch) {
     if(data->set.chunk_end && ftpc->file) {
       Curl_set_in_callback(data, true);
@@ -3166,41 +3186,41 @@ static CURLcode ftp_done(struct connectdata *conn, CURLcode status,
   }
 
   if(!result)
-    /* get the "raw" path */
-    result = Curl_urldecode(data, ftp->path, 0, &path, NULL, TRUE);
+    /* get the url-decoded "raw" path */
+    result = Curl_urldecode(data, ftp->path, 0, &rawPath, &pathLen, TRUE);
   if(result) {
     /* We can limp along anyway (and should try to since we may already be in
      * the error path) */
     ftpc->ctl_valid = FALSE; /* mark control connection as bad */
     connclose(conn, "FTP: out of memory!"); /* mark for connection closure */
+    free(ftpc->prevpath);
     ftpc->prevpath = NULL; /* no path remembering */
   }
-  else {
-    size_t flen = ftpc->file?strlen(ftpc->file):0; /* file is "raw" already */
-    size_t dlen = strlen(path)-flen;
-    if(!ftpc->cwdfail) {
-      ftpc->prevmethod = data->set.ftp_filemethod;
-      if(dlen && (data->set.ftp_filemethod != FTPFILE_NOCWD)) {
-        ftpc->prevpath = path;
-        if(flen)
-          /* if 'path' is not the whole string */
-          ftpc->prevpath[dlen] = 0; /* terminate */
+  else { /* remember working directory for connection reuse */
+    if((data->set.ftp_filemethod == FTPFILE_NOCWD) && (rawPath[0] == '/'))
+      free(rawPath); /* full path => no CWDs happened => keep ftpc->prevpath */
+    else {
+      free(ftpc->prevpath);
+
+      if(!ftpc->cwdfail) {
+        if(data->set.ftp_filemethod == FTPFILE_NOCWD)
+          pathLen = 0; /* relative path => working directory is FTP home */
+        else
+          pathLen -= ftpc->file?strlen(ftpc->file):0; /* file is url-decoded */
+
+        rawPath[pathLen] = '\0';
+        ftpc->prevpath = rawPath;
       }
       else {
-        free(path);
-        /* we never changed dir */
-        ftpc->prevpath = strdup("");
-        if(!ftpc->prevpath)
-          return CURLE_OUT_OF_MEMORY;
+        free(rawPath);
+        ftpc->prevpath = NULL; /* no path */
       }
-      if(ftpc->prevpath)
-        infof(data, "Remembering we are in dir \"%s\"\n", ftpc->prevpath);
-    }
-    else {
-      ftpc->prevpath = NULL; /* no path */
-      free(path);
     }
+
+    if(ftpc->prevpath)
+      infof(data, "Remembering we are in dir \"%s\"\n", ftpc->prevpath);
   }
+
   /* free the dir tree and file parts */
   freedirs(ftpc);
 
@@ -3513,14 +3533,13 @@ static CURLcode ftp_do_more(struct connectdata *conn, int *completep)
 
     /* if we got an error or if we don't wait for a data connection return
        immediately */
-    if(result || (ftpc->wait_data_conn != TRUE))
+    if(result || !ftpc->wait_data_conn)
       return result;
 
-    if(ftpc->wait_data_conn)
-      /* if we reach the end of the FTP state machine here, *complete will be
-         TRUE but so is ftpc->wait_data_conn, which says we need to wait for
-         the data connection and therefore we're not actually complete */
-      *completep = 0;
+    /* if we reach the end of the FTP state machine here, *complete will be
+       TRUE but so is ftpc->wait_data_conn, which says we need to wait for the
+       data connection and therefore we're not actually complete */
+    *completep = 0;
   }
 
   if(ftp->transfer <= FTPTRANSFER_INFO) {
@@ -3554,13 +3573,8 @@ static CURLcode ftp_do_more(struct connectdata *conn, int *completep)
         return result;
 
       result = ftp_multi_statemach(conn, &complete);
-      if(ftpc->wait_data_conn)
-        /* if we reach the end of the FTP state machine here, *complete will be
-           TRUE but so is ftpc->wait_data_conn, which says we need to wait for
-           the data connection and therefore we're not actually complete */
-        *completep = 0;
-      else
-        *completep = (int)complete;
+      /* ftpc->wait_data_conn is always false here */
+      *completep = (int)complete;
     }
     else {
       /* download */
@@ -3600,10 +3614,8 @@ static CURLcode ftp_do_more(struct connectdata *conn, int *completep)
     return result;
   }
 
-  if(!result && (ftp->transfer != FTPTRANSFER_BODY))
-    /* no data to transfer. FIX: it feels like a kludge to have this here
-       too! */
-    Curl_setup_transfer(data, -1, -1, FALSE, -1);
+  /* no data to transfer */
+  Curl_setup_transfer(data, -1, -1, FALSE, -1);
 
   if(!ftpc->wait_data_conn) {
     /* no waiting for the data connection so this is now complete */
@@ -4080,186 +4092,142 @@ CURLcode ftp_parse_url_path(struct connectdata *conn)
   /* the ftp struct is already inited in ftp_connect() */
   struct FTP *ftp = data->req.protop;
   struct ftp_conn *ftpc = &conn->proto.ftpc;
-  const char *slash_pos;  /* position of the first '/' char in curpos */
-  const char *path_to_use = ftp->path;
-  const char *cur_pos;
-  const char *filename = NULL;
-
-  cur_pos = path_to_use; /* current position in path. point at the begin of
-                            next path component */
+  const char *slashPos = NULL;
+  const char *fileName = NULL;
+  CURLcode result = CURLE_OK;
+  char *rawPath = NULL; /* url-decoded "raw" path */
+  size_t pathLen = 0;
 
   ftpc->ctl_valid = FALSE;
   ftpc->cwdfail = FALSE;
 
-  switch(data->set.ftp_filemethod) {
-  case FTPFILE_NOCWD:
-    /* fastest, but less standard-compliant */
-
-    /*
-      The best time to check whether the path is a file or directory is right
-      here. so:
+  /* url-decode ftp path before further evaluation */
+  result = Curl_urldecode(data, ftp->path, 0, &rawPath, &pathLen, TRUE);
+  if(result)
+    return result;
 
-      the first condition in the if() right here, is there just in case
-      someone decides to set path to NULL one day
-   */
-    if(path_to_use[0] &&
-       (path_to_use[strlen(path_to_use) - 1] != '/') )
-      filename = path_to_use;  /* this is a full file path */
-    /*
-      else {
-        ftpc->file is not used anywhere other than for operations on a file.
-        In other words, never for directory operations.
-        So we can safely leave filename as NULL here and use it as a
-        argument in dir/file decisions.
-      }
-    */
-    break;
+  switch(data->set.ftp_filemethod) {
+    case FTPFILE_NOCWD: /* fastest, but less standard-compliant */
 
-  case FTPFILE_SINGLECWD:
-    /* get the last slash */
-    if(!path_to_use[0]) {
-      /* no dir, no file */
-      ftpc->dirdepth = 0;
+      if((pathLen > 0) && (rawPath[pathLen - 1] != '/'))
+          fileName = rawPath;  /* this is a full file path */
+      /*
+        else: ftpc->file is not used anywhere other than for operations on
+              a file. In other words, never for directory operations.
+              So we can safely leave filename as NULL here and use it as an
+              argument in dir/file decisions.
+      */
       break;
-    }
-    slash_pos = strrchr(cur_pos, '/');
-    if(slash_pos || !*cur_pos) {
-      size_t dirlen = slash_pos-cur_pos;
-      CURLcode result;
 
-      ftpc->dirs = calloc(1, sizeof(ftpc->dirs[0]));
-      if(!ftpc->dirs)
-        return CURLE_OUT_OF_MEMORY;
+    case FTPFILE_SINGLECWD:
+      slashPos = strrchr(rawPath, '/');
+      if(slashPos) {
+        /* get path before last slash, except for / */
+        size_t dirlen = slashPos - rawPath;
+        if(dirlen == 0)
+            dirlen++;
+
+        ftpc->dirs = calloc(1, sizeof(ftpc->dirs[0]));
+        if(!ftpc->dirs) {
+          free(rawPath);
+          return CURLE_OUT_OF_MEMORY;
+        }
 
-      if(!dirlen)
-        dirlen++;
+        ftpc->dirs[0] = calloc(1, dirlen + 1);
+        if(!ftpc->dirs[0]) {
+          free(rawPath);
+          return CURLE_OUT_OF_MEMORY;
+        }
 
-      result = Curl_urldecode(conn->data, slash_pos ? cur_pos : "/",
-                              slash_pos ? dirlen : 1,
-                              &ftpc->dirs[0], NULL,
-                              TRUE);
-      if(result) {
-        freedirs(ftpc);
-        return result;
+        strncpy(ftpc->dirs[0], rawPath, dirlen);
+        ftpc->dirdepth = 1; /* we consider it to be a single dir */
+        fileName = slashPos + 1; /* rest is file name */
       }
-      ftpc->dirdepth = 1; /* we consider it to be a single dir */
-      filename = slash_pos ? slash_pos + 1 : cur_pos; /* rest is file name */
-    }
-    else
-      filename = cur_pos;  /* this is a file name only */
-    break;
+      else
+        fileName = rawPath; /* file name only (or empty) */
+      break;
 
-  default: /* allow pretty much anything */
-  case FTPFILE_MULTICWD:
-    ftpc->dirdepth = 0;
-    ftpc->diralloc = 5; /* default dir depth to allocate */
-    ftpc->dirs = calloc(ftpc->diralloc, sizeof(ftpc->dirs[0]));
-    if(!ftpc->dirs)
-      return CURLE_OUT_OF_MEMORY;
+    default: /* allow pretty much anything */
+    case FTPFILE_MULTICWD: {
+      /* current position: begin of next path component */
+      const char *curPos = rawPath;
+
+      int dirAlloc = 0; /* number of entries allocated for the 'dirs' array */
+      const char *str = rawPath;
+      for(; *str != 0; ++str)
+        if (*str == '/')
+          ++dirAlloc;
+
+      if(dirAlloc > 0) {
+        ftpc->dirs = calloc(dirAlloc, sizeof(ftpc->dirs[0]));
+        if(!ftpc->dirs) {
+          free(rawPath);
+          return CURLE_OUT_OF_MEMORY;
+        }
+
+        /* parse the URL path into separate path components */
+        while((slashPos = strchr(curPos, '/')) != NULL) {
+          size_t compLen = slashPos - curPos;
+
+          /* path starts with a slash: add that as a directory */
+          if((compLen == 0) && (ftpc->dirdepth == 0))
+            ++compLen;
 
-    /* we have a special case for listing the root dir only */
-    if(!strcmp(path_to_use, "/")) {
-      cur_pos++; /* make it point to the zero byte */
-      ftpc->dirs[0] = strdup("/");
-      ftpc->dirdepth++;
-    }
-    else {
-      /* parse the URL path into separate path components */
-      while((slash_pos = strchr(cur_pos, '/')) != NULL) {
-        /* 1 or 0 pointer offset to indicate absolute directory */
-        ssize_t absolute_dir = ((cur_pos - ftp->path > 0) &&
-                                (ftpc->dirdepth == 0))?1:0;
-
-        /* seek out the next path component */
-        if(slash_pos-cur_pos) {
           /* we skip empty path components, like "x//y" since the FTP command
              CWD requires a parameter and a non-existent parameter a) doesn't
              work on many servers and b) has no effect on the others. */
-          size_t len = slash_pos - cur_pos + absolute_dir;
-          CURLcode result =
-            Curl_urldecode(conn->data, cur_pos - absolute_dir, len,
-                           &ftpc->dirs[ftpc->dirdepth], NULL,
-                           TRUE);
-          if(result) {
-            freedirs(ftpc);
-            return result;
-          }
-        }
-        else {
-          cur_pos = slash_pos + 1; /* jump to the rest of the string */
-          if(!ftpc->dirdepth) {
-            /* path starts with a slash, add that as a directory */
-            ftpc->dirs[ftpc->dirdepth] = strdup("/");
-            if(!ftpc->dirs[ftpc->dirdepth++]) { /* run out of memory ... */
-              failf(data, "no memory");
-              freedirs(ftpc);
+          if(compLen > 0) {
+            char *comp = calloc(1, compLen + 1);
+            if(!comp) {
+              free(rawPath);
               return CURLE_OUT_OF_MEMORY;
             }
+            strncpy(comp, curPos, compLen);
+            ftpc->dirs[ftpc->dirdepth++] = comp;
           }
-          continue;
-        }
-
-        cur_pos = slash_pos + 1; /* jump to the rest of the string */
-        if(++ftpc->dirdepth >= ftpc->diralloc) {
-          /* enlarge array */
-          char **bigger;
-          ftpc->diralloc *= 2; /* double the size each time */
-          bigger = realloc(ftpc->dirs, ftpc->diralloc * sizeof(ftpc->dirs[0]));
-          if(!bigger) {
-            freedirs(ftpc);
-            return CURLE_OUT_OF_MEMORY;
-          }
-          ftpc->dirs = bigger;
+          curPos = slashPos + 1;
         }
       }
+      DEBUGASSERT(ftpc->dirdepth <= dirAlloc);
+      fileName = curPos; /* the rest is the file name (or empty) */
     }
-    filename = cur_pos;  /* the rest is the file name */
     break;
   } /* switch */
 
-  if(filename && *filename) {
-    CURLcode result =
-      Curl_urldecode(conn->data, filename, 0,  &ftpc->file, NULL, TRUE);
-
-    if(result) {
-      freedirs(ftpc);
-      return result;
-    }
-  }
+  if(fileName && *fileName)
+    ftpc->file = strdup(fileName);
   else
-    ftpc->file = NULL; /* instead of point to a zero byte, we make it a NULL
-                          pointer */
+    ftpc->file = NULL; /* instead of pointing to a zero byte,
+                            we make it a NULL pointer */
 
   if(data->set.upload && !ftpc->file && (ftp->transfer == FTPTRANSFER_BODY)) {
     /* We need a file name when uploading. Return error! */
     failf(data, "Uploading to a URL without a file name!");
+    free(rawPath);
     return CURLE_URL_MALFORMAT;
   }
 
   ftpc->cwddone = FALSE; /* default to not done */
 
-  if(ftpc->prevpath) {
-    /* prevpath is "raw" so we convert the input path before we compare the
-       strings */
-    size_t dlen;
-    char *path;
-    CURLcode result =
-      Curl_urldecode(conn->data, ftp->path, 0, &path, &dlen, TRUE);
-    if(result) {
-      freedirs(ftpc);
-      return result;
-    }
+  if((data->set.ftp_filemethod == FTPFILE_NOCWD) && (rawPath[0] == '/'))
+    ftpc->cwddone = TRUE; /* skip CWD for absolute paths */
+  else { /* newly created FTP connections are already in entry path */
+    const char *oldPath = conn->bits.reuse ? ftpc->prevpath : "";
+    if(oldPath) {
+      size_t n = pathLen;
+      if(data->set.ftp_filemethod == FTPFILE_NOCWD)
+        n = 0; /* CWD to entry for relative paths */
+      else
+        n -= ftpc->file?strlen(ftpc->file):0;
 
-    dlen -= ftpc->file?strlen(ftpc->file):0;
-    if((dlen == strlen(ftpc->prevpath)) &&
-       !strncmp(path, ftpc->prevpath, dlen) &&
-       (ftpc->prevmethod == data->set.ftp_filemethod)) {
-      infof(data, "Request has same path as previous transfer\n");
-      ftpc->cwddone = TRUE;
+      if((strlen(oldPath) == n) && !strncmp(rawPath, oldPath, n)) {
+        infof(data, "Request has same path as previous transfer\n");
+        ftpc->cwddone = TRUE;
+      }
     }
-    free(path);
   }
 
+  free(rawPath);
   return CURLE_OK;
 }
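
The SIZE-response change earlier in this file's diff scans backwards from
the CR so that servers which prepend text before the number still parse
correctly. A self-contained sketch of that scan, with assumed helper names:

    #include <ctype.h>
    #include <stdlib.h>
    #include <string.h>

    /* "213 The file size is 4711\r\n" => 4711 */
    static long size_from_213(const char *line)
    {
      const char *start = line + 4;        /* skip "213 " */
      const char *digits = strchr(start, '\r');
      if(digits) {
        do
          digits--;
        while(isdigit((unsigned char)*digits) && (digits > start));
        if(!isdigit((unsigned char)*digits))
          digits++;
      }
      else
        digits = start;
      return strtol(digits, NULL, 10);
    }
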
 
diff --git a/lib/ftp.h b/lib/ftp.h
index 828d69a21..2c88d568c 100644
--- a/lib/ftp.h
+++ b/lib/ftp.h
@@ -121,8 +121,7 @@ struct ftp_conn {
   char *entrypath; /* the PWD reply when we logged on */
   char **dirs;   /* realloc()ed array for path components */
   int dirdepth;  /* number of entries used in the 'dirs' array */
-  int diralloc;  /* number of entries allocated for the 'dirs' array */
-  char *file;    /* decoded file */
+  char *file;    /* url-decoded file name (or path) */
   bool dont_check;  /* Set to TRUE to prevent the final (post-transfer)
                        file size and 226/250 status check. It should still
                        read the line, just ignore the result. */
@@ -135,8 +134,7 @@ struct ftp_conn {
   bool cwdfail;     /* set TRUE if a CWD command fails, as then we must prevent
                        caching the current directory */
   bool wait_data_conn; /* this is set TRUE if data connection is waited */
-  char *prevpath;   /* conn->path from the previous transfer */
-  curl_ftpfile prevmethod; /* ftp method in previous transfer  */
+  char *prevpath;   /* url-decoded conn->path from the previous transfer */
   char transfertype; /* set by ftp_transfertype for use by Curl_client_write()
                         and others (A/I or zero) */
   int count1; /* general purpose counter for the state machine */
diff --git a/lib/ftplistparser.c b/lib/ftplistparser.c
index 5893bc51f..a143a7f2e 100644
--- a/lib/ftplistparser.c
+++ b/lib/ftplistparser.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/lib/hostcheck.c b/lib/hostcheck.c
index 115d24b2e..9e0db05fa 100644
--- a/lib/hostcheck.c
+++ b/lib/hostcheck.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/lib/hostip.c b/lib/hostip.c
index bd532a891..d4e8f9366 100644
--- a/lib/hostip.c
+++ b/lib/hostip.c
@@ -749,7 +749,7 @@ clean_up:
                                             conn->created) / 1000;
 
     /* the alarm period is counted in even number of seconds */
-    unsigned long alarm_set = prev_alarm - elapsed_secs;
+    unsigned long alarm_set = (unsigned long)(prev_alarm - elapsed_secs);
 
     if(!alarm_set ||
        ((alarm_set >= 0x80000000) && (prev_alarm < 0x80000000)) ) {
diff --git a/lib/http.c b/lib/http.c
index 4ef9c66a6..7d3f7b021 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -450,9 +450,6 @@ static CURLcode http_perhapsrewind(struct connectdata *conn)
     /* figure out how much data we are expected to send */
     switch(data->set.httpreq) {
     case HTTPREQ_POST:
-      if(data->state.infilesize != -1)
-        expectsend = data->state.infilesize;
-      break;
     case HTTPREQ_PUT:
       if(data->state.infilesize != -1)
         expectsend = data->state.infilesize;
@@ -2679,7 +2676,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
     struct Cookie *co = NULL; /* no cookies from start */
     int count = 0;
 
-    if(data->cookies) {
+    if(data->cookies && data->state.cookie_engine) {
       Curl_share_lock(data, CURL_LOCK_DATA_COOKIE, CURL_LOCK_ACCESS_SINGLE);
       co = Curl_cookie_getlist(data->cookies,
                                conn->allocptr.cookiehost?
@@ -3044,8 +3041,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
       failf(data, "Failed sending HTTP request");
     else
       /* HTTP GET/HEAD download: */
-      Curl_setup_transfer(data, FIRSTSOCKET, -1, TRUE,
-                          http->postdata?FIRSTSOCKET:-1);
+      Curl_setup_transfer(data, FIRSTSOCKET, -1, TRUE, -1);
   }
   if(result)
     return result;
@@ -4017,7 +4013,7 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
         data->state.resume_from = 0; /* get everything */
     }
 #if !defined(CURL_DISABLE_COOKIES)
-    else if(data->cookies &&
+    else if(data->cookies && data->state.cookie_engine &&
             checkprefix("Set-Cookie:", k->p)) {
       Curl_share_lock(data, CURL_LOCK_DATA_COOKIE,
                       CURL_LOCK_ACCESS_SINGLE);
@@ -4058,7 +4054,7 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
       if(result)
         return result;
     }
-  #ifdef USE_SPNEGO
+#ifdef USE_SPNEGO
     else if(checkprefix("Persistent-Auth", k->p)) {
       struct negotiatedata *negdata = &conn->negotiate;
       struct auth *authp = &data->state.authhost;
@@ -4066,14 +4062,15 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
         char *persistentauth = Curl_copy_header_value(k->p);
         if(!persistentauth)
           return CURLE_OUT_OF_MEMORY;
-        negdata->noauthpersist = checkprefix("false", persistentauth);
+        negdata->noauthpersist = checkprefix("false", persistentauth)?
+          TRUE:FALSE;
         negdata->havenoauthpersist = TRUE;
         infof(data, "Negotiate: noauthpersist -> %d, header part: %s",
           negdata->noauthpersist, persistentauth);
         free(persistentauth);
       }
     }
-  #endif
+#endif
     else if((k->httpcode >= 300 && k->httpcode < 400) &&
             checkprefix("Location:", k->p) &&
             !data->req.location) {
diff --git a/lib/http.h b/lib/http.h
index b27c3c861..9b446e8aa 100644
--- a/lib/http.h
+++ b/lib/http.h
@@ -83,11 +83,6 @@ CURLcode Curl_http(struct connectdata *conn, bool *done);
 CURLcode Curl_http_done(struct connectdata *, CURLcode, bool premature);
 CURLcode Curl_http_connect(struct connectdata *conn, bool *done);
 
-/* The following functions are defined in http_chunks.c */
-void Curl_httpchunk_init(struct connectdata *conn);
-CHUNKcode Curl_httpchunk_read(struct connectdata *conn, char *datap,
-                              ssize_t length, ssize_t *wrote);
-
 /* These functions are in http.c */
 CURLcode Curl_http_input_auth(struct connectdata *conn, bool proxy,
                               const char *auth);
diff --git a/lib/http2.c b/lib/http2.c
index 8cdcf968c..631c92da7 100644
--- a/lib/http2.c
+++ b/lib/http2.c
@@ -496,16 +496,14 @@ static struct Curl_easy *duphandle(struct Curl_easy *data)
     /* setup the request struct */
     struct HTTP *http = calloc(1, sizeof(struct HTTP));
     if(!http) {
-      (void)Curl_close(second);
-      second = NULL;
+      (void)Curl_close(&second);
     }
     else {
       second->req.protop = http;
       http->header_recvbuf = Curl_add_buffer_init();
       if(!http->header_recvbuf) {
         free(http);
-        (void)Curl_close(second);
-        second = NULL;
+        (void)Curl_close(&second);
       }
       else {
         Curl_http2_setup_req(second);
@@ -547,7 +545,7 @@ static int push_promise(struct Curl_easy *data,
     stream = data->req.protop;
     if(!stream) {
       failf(data, "Internal NULL stream!\n");
-      (void)Curl_close(newhandle);
+      (void)Curl_close(&newhandle);
       rv = 1;
       goto fail;
     }
@@ -569,7 +567,7 @@ static int push_promise(struct Curl_easy *data,
       /* denied, kill off the new handle again */
       http2_stream_free(newhandle->req.protop);
       newhandle->req.protop = NULL;
-      (void)Curl_close(newhandle);
+      (void)Curl_close(&newhandle);
       goto fail;
     }
 
@@ -585,7 +583,7 @@ static int push_promise(struct Curl_easy *data,
       infof(data, "failed to add handle to multi\n");
       http2_stream_free(newhandle->req.protop);
       newhandle->req.protop = NULL;
-      Curl_close(newhandle);
+      Curl_close(&newhandle);
       rv = 1;
       goto fail;
     }
@@ -848,6 +846,7 @@ static int on_stream_close(nghttp2_session *session, int32_t stream_id,
     stream->closed = TRUE;
     httpc = &conn->proto.httpc;
     drain_this(data_s, httpc);
+    Curl_expire(data_s, 0, EXPIRE_RUN_NOW);
     httpc->error_code = error_code;
 
     /* remove the entry from the hash as the stream is now gone */
@@ -967,7 +966,9 @@ static int on_header(nghttp2_session *session, const nghttp2_frame *frame,
       if(!check)
         /* no memory */
         return NGHTTP2_ERR_CALLBACK_FAILURE;
-      if(!Curl_strcasecompare(check, (const char *)value)) {
+      if(!Curl_strcasecompare(check, (const char *)value) &&
+         ((conn->remote_port != conn->given->defport) ||
+          !Curl_strcasecompare(conn->host.name, (const char *)value))) {
         /* This push is not for the same authority that was asked for in
          * the URL. RFC 7540 section 8.2 says: "A client MUST treat a
          * PUSH_PROMISE for which the server is not authoritative as a stream
@@ -1157,7 +1158,7 @@ static void populate_settings(struct connectdata *conn,
   nghttp2_settings_entry *iv = httpc->local_settings;
 
   iv[0].settings_id = NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS;
-  iv[0].value = 100;
+  iv[0].value = (uint32_t)Curl_multi_max_concurrent_streams(conn->data->multi);
 
   iv[1].settings_id = NGHTTP2_SETTINGS_INITIAL_WINDOW_SIZE;
   iv[1].value = HTTP2_HUGE_WINDOW_SIZE;
@@ -1535,6 +1536,7 @@ static int h2_session_send(struct Curl_easy *data,
 
     H2BUGF(infof(data, "Queuing PRIORITY on stream %u (easy %p)\n",
                  stream->stream_id, data));
+    DEBUGASSERT(stream->stream_id != -1);
     rv = nghttp2_submit_priority(h2, NGHTTP2_FLAG_NONE, stream->stream_id,
                                  &pri_spec);
     if(rv)
@@ -1659,6 +1661,9 @@ static ssize_t http2_recv(struct connectdata *conn, int sockindex,
        socket is not read.  But it seems that usually streams are
        notified with its drain property, and socket is read again
        quickly. */
+    if(stream->closed)
+      /* closed overrides paused */
+      return 0;
     H2BUGF(infof(data, "stream %x is paused, pause id: %x\n",
                  stream->stream_id, httpc->pause_stream_id));
     *err = CURLE_AGAIN;
@@ -1773,8 +1778,9 @@ static ssize_t http2_recv(struct connectdata *conn, int sockindex,
    field list. */
 #define AUTHORITY_DST_IDX 3
 
+/* USHRT_MAX is 65535 == 0xffff */
 #define HEADER_OVERFLOW(x) \
-  (x.namelen > (uint16_t)-1 || x.valuelen > (uint16_t)-1 - x.namelen)
+  (x.namelen > 0xffff || x.valuelen > 0xffff - x.namelen)
 
 /*
  * Check header memory for the token "trailers".
@@ -2024,8 +2030,10 @@ static ssize_t http2_send(struct connectdata *conn, int sockindex,
       nva[i].namelen = strlen((char *)nva[i].name);
     }
     else {
-      nva[i].name = (unsigned char *)hdbuf;
       nva[i].namelen = (size_t)(end - hdbuf);
+      /* Lower case the header name for HTTP/2 */
+      Curl_strntolower((char *)hdbuf, hdbuf, nva[i].namelen);
+      nva[i].name = (unsigned char *)hdbuf;
     }
     hdbuf = end + 1;
     while(*hdbuf == ' ' || *hdbuf == '\t')
@@ -2135,17 +2143,14 @@ static ssize_t http2_send(struct connectdata *conn, int sockindex,
     return -1;
   }
 
-  if(stream->stream_id != -1) {
-    /* If whole HEADERS frame was sent off to the underlying socket,
-       the nghttp2 library calls data_source_read_callback. But only
-       it found that no data available, so it deferred the DATA
-       transmission. Which means that nghttp2_session_want_write()
-       returns 0 on http2_perform_getsock(), which results that no
-       writable socket check is performed. To workaround this, we
-       issue nghttp2_session_resume_data() here to bring back DATA
-       transmission from deferred state. */
-    nghttp2_session_resume_data(h2, stream->stream_id);
-  }
+  /* If whole HEADERS frame was sent off to the underlying socket, the nghttp2
+     library calls data_source_read_callback. But only it found that no data
+     available, so it deferred the DATA transmission. Which means that
+     nghttp2_session_want_write() returns 0 on http2_perform_getsock(), which
+     results that no writable socket check is performed. To workaround this,
+     we issue nghttp2_session_resume_data() here to bring back DATA
+     transmission from deferred state. */
+  nghttp2_session_resume_data(h2, stream->stream_id);
 
   return len;
 
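
The workaround the comment above describes boils down to a small caller-side
pattern. The sketch below is hypothetical (the helper name and its arguments
are made up; only the nghttp2 calls are real library API):

    #include <nghttp2/nghttp2.h>

    /* Once a request's data provider has returned NGHTTP2_ERR_DEFERRED,
       nghttp2_session_want_write() reports 0 and no writability check gets
       scheduled; resuming the stream brings the DATA frames back. */
    static void resume_deferred_upload(nghttp2_session *h2, int32_t stream_id)
    {
      nghttp2_session_resume_data(h2, stream_id);
      /* nghttp2_session_want_write(h2) should be non-zero again, so the
         transfer loop will poll the socket for writability */
    }
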
diff --git a/lib/http_chunks.c b/lib/http_chunks.c
index 18dfcb282..b6ffa4185 100644
--- a/lib/http_chunks.c
+++ b/lib/http_chunks.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -109,7 +109,8 @@ void Curl_httpchunk_init(struct connectdata *conn)
 CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
                               char *datap,
                               ssize_t datalen,
-                              ssize_t *wrotep)
+                              ssize_t *wrotep,
+                              CURLcode *extrap)
 {
   CURLcode result = CURLE_OK;
   struct Curl_easy *data = conn->data;
@@ -125,8 +126,10 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
      chunk read process, to properly calculate the content length*/
   if(data->set.http_te_skip && !k->ignorebody) {
     result = Curl_client_write(conn, CLIENTWRITE_BODY, datap, datalen);
-    if(result)
-      return CHUNKE_WRITE_ERROR;
+    if(result) {
+      *extrap = result;
+      return CHUNKE_PASSTHRU_ERROR;
+    }
   }
 
   while(length) {
@@ -197,8 +200,10 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
         else
           result = Curl_client_write(conn, CLIENTWRITE_BODY, datap, piece);
 
-        if(result)
-          return CHUNKE_WRITE_ERROR;
+        if(result) {
+          *extrap = result;
+          return CHUNKE_PASSTHRU_ERROR;
+        }
       }
 
       *wrote += piece;
@@ -244,8 +249,10 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
           if(!data->set.http_te_skip) {
             result = Curl_client_write(conn, CLIENTWRITE_HEADER,
                                        conn->trailer, conn->trlPos);
-            if(result)
-              return CHUNKE_WRITE_ERROR;
+            if(result) {
+              *extrap = result;
+              return CHUNKE_PASSTHRU_ERROR;
+            }
           }
           conn->trlPos = 0;
           ch->state = CHUNK_TRAILER_CR;
@@ -339,8 +346,9 @@ const char *Curl_chunked_strerror(CHUNKcode code)
     return "Illegal or missing hexadecimal sequence";
   case CHUNKE_BAD_CHUNK:
     return "Malformed encoding found";
-  case CHUNKE_WRITE_ERROR:
-    return "Write error";
+  case CHUNKE_PASSTHRU_ERROR:
+    DEBUGASSERT(0); /* never used */
+    return "";
   case CHUNKE_BAD_ENCODING:
     return "Bad content-encoding found";
   case CHUNKE_OUT_OF_MEMORY:
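
The new fifth parameter is how callers get at the real error. A sketch of the
intended calling pattern, with made-up helper and variable names, might look
like this:

    /* When the parser reports CHUNKE_PASSTHRU_ERROR, the CURLcode stored in
       'extra' is the error from the client-write callback and should be
       returned as-is instead of a generic write error. */
    static CURLcode consume_chunked(struct connectdata *conn,
                                    char *buf, ssize_t nread)
    {
      CURLcode extra = CURLE_OK;
      ssize_t consumed = 0;
      CHUNKcode res = Curl_httpchunk_read(conn, buf, nread, &consumed, &extra);
      if(res == CHUNKE_PASSTHRU_ERROR)
        return extra;            /* propagate the pass-through CURLcode */
      if(res > CHUNKE_OK)
        return CURLE_RECV_ERROR; /* a genuine chunked-decoding problem */
      return CURLE_OK;
    }
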
diff --git a/lib/http_chunks.h b/lib/http_chunks.h
index b969c5590..8f4a33c8e 100644
--- a/lib/http_chunks.h
+++ b/lib/http_chunks.h
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2014, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -21,6 +21,9 @@
  * KIND, either express or implied.
  *
  ***************************************************************************/
+
+struct connectdata;
+
 /*
  * The longest possible hexadecimal number we support in a chunked transfer.
  * Weird enough, RFC2616 doesn't set a maximum size! Since we use strtoul()
@@ -71,9 +74,9 @@ typedef enum {
   CHUNKE_TOO_LONG_HEX = 1,
   CHUNKE_ILLEGAL_HEX,
   CHUNKE_BAD_CHUNK,
-  CHUNKE_WRITE_ERROR,
   CHUNKE_BAD_ENCODING,
   CHUNKE_OUT_OF_MEMORY,
+  CHUNKE_PASSTHRU_ERROR, /* Curl_httpchunk_read() returns a CURLcode to use */
   CHUNKE_LAST
 } CHUNKcode;
 
@@ -87,4 +90,10 @@ struct Curl_chunker {
   size_t dataleft; /* untouched data amount at the end of the last buffer */
 };
 
+/* The following functions are defined in http_chunks.c */
+void Curl_httpchunk_init(struct connectdata *conn);
+CHUNKcode Curl_httpchunk_read(struct connectdata *conn, char *datap,
+                              ssize_t length, ssize_t *wrote,
+                              CURLcode *passthru);
+
 #endif /* HEADER_CURL_HTTP_CHUNKS_H */
diff --git a/lib/http_proxy.c b/lib/http_proxy.c
index 7d8c5eb63..b6f31af32 100644
--- a/lib/http_proxy.c
+++ b/lib/http_proxy.c
@@ -327,7 +327,7 @@ static CURLcode CONNECT(struct connectdata *conn,
     { /* READING RESPONSE PHASE */
       int error = SELECT_OK;
 
-      while(s->keepon && !error) {
+      while(s->keepon) {
         ssize_t gotbytes;
 
         /* make sure we have space to read more data */
@@ -384,11 +384,12 @@ static CURLcode CONNECT(struct connectdata *conn,
             /* chunked-encoded body, so we need to do the chunked dance
                properly to know when the end of the body is reached */
             CHUNKcode r;
+            CURLcode extra;
             ssize_t tookcareof = 0;
 
             /* now parse the chunked piece of data so that we can
                properly tell when the stream ends */
-            r = Curl_httpchunk_read(conn, s->ptr, 1, &tookcareof);
+            r = Curl_httpchunk_read(conn, s->ptr, 1, &tookcareof, &extra);
             if(r == CHUNKE_STOP) {
               /* we're done reading chunks! */
               infof(data, "chunk reading DONE\n");
@@ -455,6 +456,7 @@ static CURLcode CONNECT(struct connectdata *conn,
             }
             else if(s->chunked_encoding) {
               CHUNKcode r;
+              CURLcode extra;
 
               infof(data, "Ignore chunked response-body\n");
 
@@ -472,7 +474,8 @@ static CURLcode CONNECT(struct connectdata *conn,
 
               /* now parse the chunked piece of data so that we can
                  properly tell when the stream ends */
-              r = Curl_httpchunk_read(conn, s->line_start + 1, 1, &gotbytes);
+              r = Curl_httpchunk_read(conn, s->line_start + 1, 1, &gotbytes,
+                                      &extra);
               if(r == CHUNKE_STOP) {
                 /* we're done reading chunks! */
                 infof(data, "chunk reading DONE\n");
diff --git a/lib/imap.c b/lib/imap.c
index ee9e04aa1..40d804be9 100644
--- a/lib/imap.c
+++ b/lib/imap.c
@@ -1306,6 +1306,7 @@ static CURLcode imap_statemach_act(struct connectdata *conn)
       break;
 
     case IMAP_LIST:
+    case IMAP_SEARCH:
       result = imap_state_listsearch_resp(conn, imapcode, imapc->state);
       break;
 
@@ -1329,10 +1330,6 @@ static CURLcode imap_statemach_act(struct connectdata *conn)
       result = imap_state_append_final_resp(conn, imapcode, imapc->state);
       break;
 
-    case IMAP_SEARCH:
-      result = imap_state_listsearch_resp(conn, imapcode, imapc->state);
-      break;
-
     case IMAP_LOGOUT:
       /* fallthrough, just stop! */
     default:
diff --git a/lib/ldap.c b/lib/ldap.c
index 66ec35236..95f2f1a98 100644
--- a/lib/ldap.c
+++ b/lib/ldap.c
@@ -119,6 +119,12 @@ static void _ldap_free_urldesc(LDAPURLDesc *ludp);
   #define LDAP_TRACE(x)   Curl_nop_stmt
 #endif
 
+#if defined(USE_WIN32_LDAP) && defined(ldap_err2string)
+/* Use ansi error strings in UNICODE builds */
+#undef ldap_err2string
+#define ldap_err2string ldap_err2stringA
+#endif
+
 
 static CURLcode Curl_ldap(struct connectdata *conn, bool *done);
 
@@ -838,10 +844,10 @@ static bool split_str(char *str, char ***out, size_t *count)
 static int _ldap_url_parse2(const struct connectdata *conn, LDAPURLDesc *ludp)
 {
   int rc = LDAP_SUCCESS;
-  char *path;
-  char *query;
   char *p;
-  char *q;
+  char *path;
+  char *q = NULL;
+  char *query = NULL;
   size_t i;
 
   if(!conn->data ||
@@ -859,11 +865,13 @@ static int _ldap_url_parse2(const struct connectdata *conn, LDAPURLDesc *ludp)
   if(!path)
     return LDAP_NO_MEMORY;
 
-  /* Duplicate the query */
-  q = query = strdup(conn->data->state.up.query);
-  if(!query) {
-    free(path);
-    return LDAP_NO_MEMORY;
+  /* Duplicate the query if present */
+  if(conn->data->state.up.query) {
+    q = query = strdup(conn->data->state.up.query);
+    if(!query) {
+      free(path);
+      return LDAP_NO_MEMORY;
+    }
   }
 
   /* Parse the DN (Distinguished Name) */
diff --git a/lib/mime.c b/lib/mime.c
index eb77cbddd..c974d195a 100644
--- a/lib/mime.c
+++ b/lib/mime.c
@@ -1135,6 +1135,8 @@ CURLcode Curl_mime_duppart(curl_mimepart *dst, const curl_mimepart *src)
   const curl_mimepart *s;
   CURLcode res = CURLE_OK;
 
+  DEBUGASSERT(dst);
+
   /* Duplicate content. */
   switch(src->kind) {
   case MIMEKIND_NONE:
@@ -1184,20 +1186,18 @@ CURLcode Curl_mime_duppart(curl_mimepart *dst, const curl_mimepart *src)
     }
   }
 
-  /* Duplicate other fields. */
-  if(dst != NULL)
+  if(!res) {
+    /* Duplicate other fields. */
     dst->encoder = src->encoder;
-  else
-    res = CURLE_WRITE_ERROR;
-  if(!res)
     res = curl_mime_type(dst, src->mimetype);
+  }
   if(!res)
     res = curl_mime_name(dst, src->name);
   if(!res)
     res = curl_mime_filename(dst, src->filename);
 
   /* If an error occurred, rollback. */
-  if(res && dst)
+  if(res)
     Curl_mime_cleanpart(dst);
 
   return res;
@@ -1901,4 +1901,11 @@ CURLcode curl_mime_headers(curl_mimepart *part,
   return CURLE_NOT_BUILT_IN;
 }
 
+CURLcode Curl_mime_add_header(struct curl_slist **slp, const char *fmt, ...)
+{
+  (void)slp;
+  (void)fmt;
+  return CURLE_NOT_BUILT_IN;
+}
+
 #endif /* if disabled */
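
The stub above replaces a variadic macro because variadic macros require C99
while a variadic function is plain C90. A minimal illustration of the
difference (the names here are invented for the example):

    /* C99 only:
       #define add_header(slp, fmt, ...) CURLE_NOT_BUILT_IN */

    /* C90-friendly equivalent, which is what the disabled-mime build now
       provides as a real function: */
    CURLcode add_header_stub(struct curl_slist **slp, const char *fmt, ...)
    {
      (void)slp;
      (void)fmt;
      return CURLE_NOT_BUILT_IN;
    }
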
diff --git a/lib/mime.h b/lib/mime.h
index 4c9a5fb71..3241fdc1f 100644
--- a/lib/mime.h
+++ b/lib/mime.h
@@ -127,7 +127,9 @@ struct curl_mimepart_s {
   mime_encoder_state encstate;     /* Data encoder state. */
 };
 
-#if (!defined(CURL_DISABLE_HTTP) && !defined(CURL_DISABLE_MIME)) || \
+CURLcode Curl_mime_add_header(struct curl_slist **slp, const char *fmt, ...);
+
+#if (!defined(CURL_DISABLE_HTTP) && !defined(CURL_DISABLE_MIME)) ||     \
   !defined(CURL_DISABLE_SMTP) || !defined(CURL_DISABLE_IMAP)
 
 /* Prototypes. */
@@ -144,7 +146,6 @@ curl_off_t Curl_mime_size(curl_mimepart *part);
 size_t Curl_mime_read(char *buffer, size_t size, size_t nitems,
                       void *instream);
 CURLcode Curl_mime_rewind(curl_mimepart *part);
-CURLcode Curl_mime_add_header(struct curl_slist **slp, const char *fmt, ...);
 const char *Curl_mime_contenttype(const char *filename);
 
 #else
@@ -157,7 +158,6 @@ const char *Curl_mime_contenttype(const char *filename);
 #define Curl_mime_size(x) (curl_off_t) -1
 #define Curl_mime_read NULL
 #define Curl_mime_rewind(x) ((void)x, CURLE_NOT_BUILT_IN)
-#define Curl_mime_add_header(x,y,...) CURLE_NOT_BUILT_IN
 #endif
 
 
diff --git a/lib/multi.c b/lib/multi.c
index 5fe6c58a4..4875afec5 100755
--- a/lib/multi.c
+++ b/lib/multi.c
@@ -363,7 +363,7 @@ struct Curl_multi *Curl_multi_handle(int hashsize, /* socket hash */
   Curl_llist_init(&multi->msglist, NULL);
   Curl_llist_init(&multi->pending, NULL);
 
-  multi->multiplexing = CURLPIPE_MULTIPLEX;
+  multi->multiplexing = TRUE;
 
   /* -1 means it not set by user, use the default value */
   multi->maxconnects = -1;
@@ -2772,6 +2772,16 @@ CURLMcode curl_multi_setopt(struct Curl_multi *multi,
     break;
   case CURLMOPT_PIPELINING_SERVER_BL:
     break;
+  case CURLMOPT_MAX_CONCURRENT_STREAMS:
+    {
+      long streams = va_arg(param, long);
+      if(streams < 1)
+        streams = 100;
+      multi->max_concurrent_streams =
+          (streams > (long)INITIAL_MAX_CONCURRENT_STREAMS)?
+          (long)INITIAL_MAX_CONCURRENT_STREAMS : streams;
+    }
+    break;
   default:
     res = CURLM_UNKNOWN_OPTION;
     break;
@@ -3210,3 +3220,9 @@ void Curl_multi_dump(struct Curl_multi *multi)
   }
 }
 #endif
+
+size_t Curl_multi_max_concurrent_streams(struct Curl_multi *multi)
+{
+  return multi ? ((size_t)multi->max_concurrent_streams ?
+                  (size_t)multi->max_concurrent_streams : 100) : 0;
+}
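
From the application side, the new multi option can be used roughly like this
(a hypothetical sketch; the header path assumes a gnurl install and the value
50 is arbitrary). Values below 1 fall back to 100 and values above the
built-in ceiling are clamped, as the setopt code above shows:

    #include <gnurl/curl.h>

    static CURLM *setup_multi(void)
    {
      CURLM *m = curl_multi_init();
      if(m)
        /* advertise at most 50 concurrent streams per HTTP/2 connection */
        curl_multi_setopt(m, CURLMOPT_MAX_CONCURRENT_STREAMS, 50L);
      return m;
    }
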
diff --git a/lib/multihandle.h b/lib/multihandle.h
index 279379ae0..b65bd9638 100644
--- a/lib/multihandle.h
+++ b/lib/multihandle.h
@@ -133,6 +133,7 @@ struct Curl_multi {
   struct curltime timer_lastcall; /* the fixed time for the timeout for the
                                     previous callback */
   bool in_callback;            /* true while executing a callback */
+  long max_concurrent_streams; /* max concurrent streams client to support */
 };
 
 #endif /* HEADER_CURL_MULTIHANDLE_H */
diff --git a/lib/multiif.h b/lib/multiif.h
index 0755a7cd2..75025232c 100644
--- a/lib/multiif.h
+++ b/lib/multiif.h
@@ -89,4 +89,10 @@ CURLMcode Curl_multi_add_perform(struct Curl_multi *multi,
                                  struct Curl_easy *data,
                                  struct connectdata *conn);
 
+
+/* Return the value of the CURLMOPT_MAX_CONCURRENT_STREAMS option
+ * If not specified or 0, default would be 100
+ */
+size_t Curl_multi_max_concurrent_streams(struct Curl_multi *multi);
+
 #endif /* HEADER_CURL_MULTIIF_H */
diff --git a/lib/netrc.c b/lib/netrc.c
index dcc702619..326a2c674 100644
--- a/lib/netrc.c
+++ b/lib/netrc.c
@@ -88,7 +88,7 @@ static int parsenetrc(const char *host,
       if(tok && *tok == '#')
         /* treat an initial hash as a comment line */
         continue;
-      while(!done && tok) {
+      while(tok) {
 
         if((login && *login) && (password && *password)) {
           done = TRUE;
diff --git a/lib/non-ascii.c b/lib/non-ascii.c
index ba3240551..2b5d307e5 100644
--- a/lib/non-ascii.c
+++ b/lib/non-ascii.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/lib/parsedate.c b/lib/parsedate.c
index d069642ce..428101741 100644
--- a/lib/parsedate.c
+++ b/lib/parsedate.c
@@ -100,16 +100,20 @@ static int parsedate(const char *date, time_t *output);
 #define PARSEDATE_LATER  1
 #define PARSEDATE_SOONER 2
 
-#ifndef CURL_DISABLE_PARSEDATE
-
+#if !defined(CURL_DISABLE_PARSEDATE) || !defined(CURL_DISABLE_FTP) || \
+  !defined(CURL_DISABLE_FILE)
+/* These names are also used by FTP and FILE code */
 const char * const Curl_wkday[] =
 {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"};
-static const char * const weekday[] =
-{ "Monday", "Tuesday", "Wednesday", "Thursday",
-  "Friday", "Saturday", "Sunday" };
 const char * const Curl_month[]=
 { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
   "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
+#endif
+
+#ifndef CURL_DISABLE_PARSEDATE
+static const char * const weekday[] =
+{ "Monday", "Tuesday", "Wednesday", "Thursday",
+  "Friday", "Saturday", "Sunday" };
 
 struct tzinfo {
   char name[5];
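
The name arrays stay available because the FTP and FILE code format dates
from them. A standalone sketch of that kind of use (snprintf stands in for
the library's own printf wrappers; the tm_wday index shuffle is needed
because Curl_wkday[] starts at Monday while struct tm counts from Sunday):

    #include <stdio.h>
    #include <time.h>

    static void format_http_date(char *buf, size_t len, const struct tm *tm)
    {
      snprintf(buf, len, "%s, %02d %s %4d %02d:%02d:%02d GMT",
               Curl_wkday[tm->tm_wday ? tm->tm_wday - 1 : 6],
               tm->tm_mday, Curl_month[tm->tm_mon],
               tm->tm_year + 1900, tm->tm_hour, tm->tm_min, tm->tm_sec);
    }
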
diff --git a/lib/security.c b/lib/security.c
index c5e4e135d..fbfa70741 100644
--- a/lib/security.c
+++ b/lib/security.c
@@ -236,7 +236,7 @@ static ssize_t sec_recv(struct connectdata *conn, int sockindex,
 
   /* Handle clear text response. */
   if(conn->sec_complete == 0 || conn->data_prot == PROT_CLEAR)
-      return read(fd, buffer, len);
+      return sread(fd, buffer, len);
 
   if(conn->in_buffer.eof_flag) {
     conn->in_buffer.eof_flag = 0;
diff --git a/lib/setopt.c b/lib/setopt.c
index 8909035a9..e18fa1149 100644
--- a/lib/setopt.c
+++ b/lib/setopt.c
@@ -315,7 +315,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
      * Parse the $HOME/.netrc file
      */
     arg = va_arg(param, long);
-    if((arg < CURL_NETRC_IGNORED) || (arg > CURL_NETRC_REQUIRED))
+    if((arg < CURL_NETRC_IGNORED) || (arg >= CURL_NETRC_LAST))
       return CURLE_BAD_FUNCTION_ARGUMENT;
     data->set.use_netrc = (enum CURL_NETRC_OPTION)arg;
     break;
@@ -339,10 +339,10 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
   case CURLOPT_TIMECONDITION:
     /*
      * Set HTTP time condition. This must be one of the defines in the
-     * curl/curl.h header file.
+     * gnurl/curl.h header file.
      */
     arg = va_arg(param, long);
-    if((arg < CURL_TIMECOND_NONE) || (arg > CURL_TIMECOND_LASTMOD))
+    if((arg < CURL_TIMECOND_NONE) || (arg >= CURL_TIMECOND_LAST))
       return CURLE_BAD_FUNCTION_ARGUMENT;
     data->set.timecondition = (curl_TimeCond)arg;
     break;
@@ -752,7 +752,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
     }
     else if(strcasecompare(argptr, "FLUSH")) {
       /* flush cookies to file, takes care of the locking */
-      Curl_flush_cookies(data, 0);
+      Curl_flush_cookies(data, FALSE);
     }
     else if(strcasecompare(argptr, "RELOAD")) {
       /* reload cookies from file */
@@ -804,7 +804,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
   case CURLOPT_HTTP_VERSION:
     /*
      * This sets a requested HTTP version to be used. The value is one of
-     * the listed enums in curl/curl.h.
+     * the listed enums in gnurl/curl.h.
      */
     arg = va_arg(param, long);
     if(arg < CURL_HTTP_VERSION_NONE)
@@ -818,7 +818,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
     if(arg >= CURL_HTTP_VERSION_2)
       return CURLE_UNSUPPORTED_PROTOCOL;
 #else
-    if(arg > CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE)
+    if(arg >= CURL_HTTP_VERSION_LAST)
       return CURLE_UNSUPPORTED_PROTOCOL;
     if(arg == CURL_HTTP_VERSION_NONE)
       arg = CURL_HTTP_VERSION_2TLS;
@@ -1109,7 +1109,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
      * How do access files over FTP.
      */
     arg = va_arg(param, long);
-    if((arg < CURLFTPMETHOD_DEFAULT) || (arg > CURLFTPMETHOD_SINGLECWD))
+    if((arg < CURLFTPMETHOD_DEFAULT) || (arg >= CURLFTPMETHOD_LAST))
       return CURLE_BAD_FUNCTION_ARGUMENT;
     data->set.ftp_filemethod = (curl_ftpfile)arg;
     break;
@@ -1136,7 +1136,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
 
   case CURLOPT_FTP_SSL_CCC:
     arg = va_arg(param, long);
-    if((arg < CURLFTPSSL_CCC_NONE) || (arg > CURLFTPSSL_CCC_ACTIVE))
+    if((arg < CURLFTPSSL_CCC_NONE) || (arg >= CURLFTPSSL_CCC_LAST))
       return CURLE_BAD_FUNCTION_ARGUMENT;
     data->set.ftp_ccc = (curl_ftpccc)arg;
     break;
@@ -1164,7 +1164,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
      * Set a specific auth for FTP-SSL transfers.
      */
     arg = va_arg(param, long);
-    if((arg < CURLFTPAUTH_DEFAULT) || (arg > CURLFTPAUTH_TLS))
+    if((arg < CURLFTPAUTH_DEFAULT) || (arg >= CURLFTPAUTH_LAST))
       return CURLE_BAD_FUNCTION_ARGUMENT;
     data->set.ftpsslauth = (curl_ftpauth)arg;
     break;
@@ -2123,7 +2123,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
      * Make transfers attempt to use SSL/TLS.
      */
     arg = va_arg(param, long);
-    if((arg < CURLUSESSL_NONE) || (arg > CURLUSESSL_ALL))
+    if((arg < CURLUSESSL_NONE) || (arg >= CURLUSESSL_LAST))
       return CURLE_BAD_FUNCTION_ARGUMENT;
     data->set.use_ssl = (curl_usessl)arg;
     break;
@@ -2500,7 +2500,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
 
   case CURLOPT_RTSP_SERVER_CSEQ:
     /* Same as the above, but for server-initiated requests */
-    data->state.rtsp_next_client_CSeq = va_arg(param, long);
+    data->state.rtsp_next_server_CSeq = va_arg(param, long);
     break;
 
   case CURLOPT_INTERLEAVEDATA:
@@ -2725,7 +2725,8 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
     result = Curl_setstropt(&data->set.str[STRING_ALTSVC], argptr);
     if(result)
       return result;
-    (void)Curl_altsvc_load(data->asi, argptr);
+    if(argptr)
+      (void)Curl_altsvc_load(data->asi, argptr);
     break;
   case CURLOPT_ALTSVC_CTRL:
     if(!data->asi) {
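
All of the range checks above now compare against each enum's *_LAST
sentinel, so a new member added before the sentinel no longer needs a
matching setopt.c edit. For reference, the public enums follow this shape
(paraphrased from the installed header):

    enum CURL_NETRC_OPTION {
      CURL_NETRC_IGNORED,
      CURL_NETRC_OPTIONAL,
      CURL_NETRC_REQUIRED,
      CURL_NETRC_LAST   /* never a valid setting, only marks the range end */
    };
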
diff --git a/lib/setup-os400.h b/lib/setup-os400.h
index a3c2a7bdc..629fd94c4 100644
--- a/lib/setup-os400.h
+++ b/lib/setup-os400.h
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -206,11 +206,15 @@ extern int Curl_os400_sendto(int sd, char *buffer, int buflen, int flags,
                              struct sockaddr * dstaddr, int addrlen);
 extern int Curl_os400_recvfrom(int sd, char *buffer, int buflen, int flags,
                                struct sockaddr *fromaddr, int *addrlen);
+extern int Curl_os400_getpeername(int sd, struct sockaddr *addr, int *addrlen);
+extern int Curl_os400_getsockname(int sd, struct sockaddr *addr, int *addrlen);
 
 #define connect                 Curl_os400_connect
 #define bind                    Curl_os400_bind
 #define sendto                  Curl_os400_sendto
 #define recvfrom                Curl_os400_recvfrom
+#define getpeername             Curl_os400_getpeername
+#define getsockname             Curl_os400_getsockname
 
 #ifdef HAVE_LIBZ
 #define zlibVersion             Curl_os400_zlibVersion
diff --git a/lib/smb.c b/lib/smb.c
index f66c05ca4..12f99257f 100644
--- a/lib/smb.c
+++ b/lib/smb.c
@@ -682,7 +682,8 @@ static CURLcode smb_connection_state(struct connectdata *conn, bool *done)
 
   switch(smbc->state) {
   case SMB_NEGOTIATE:
-    if(h->status || smbc->got < sizeof(*nrsp) + sizeof(smbc->challenge) - 1) {
+    if((smbc->got < sizeof(*nrsp) + sizeof(smbc->challenge) - 1) ||
+       h->status) {
       connclose(conn, "SMB: negotiation failed");
       return CURLE_COULDNT_CONNECT;
     }
diff --git a/lib/socketpair.c b/lib/socketpair.c
new file mode 100644
index 000000000..1f0e2e4a4
--- /dev/null
+++ b/lib/socketpair.c
@@ -0,0 +1,118 @@
+/***************************************************************************
+ *                                  _   _ ____  _
+ *  Project                     ___| | | |  _ \| |
+ *                             / __| | | | |_) | |
+ *                            | (__| |_| |  _ <| |___
+ *                             \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 2019, Daniel Stenberg, <address@hidden>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+
+#include "curl_setup.h"
+#include "socketpair.h"
+
+#ifndef HAVE_SOCKETPAIR
+#ifdef WIN32
+/*
+ * This is a socketpair() implementation for Windows.
+ */
+#include <string.h>
+#include <winsock2.h>
+#include <ws2tcpip.h>
+#include <windows.h>
+#include <io.h>
+#else
+#ifdef HAVE_NETDB_H
+#include <netdb.h>
+#endif
+#ifdef HAVE_NETINET_IN_H
+#include <netinet/in.h> /* IPPROTO_TCP */
+#endif
+#ifndef INADDR_LOOPBACK
+#define INADDR_LOOPBACK 0x7f000001
+#endif /* !INADDR_LOOPBACK */
+#endif /* !WIN32 */
+
+/* The last 3 #include files should be in this order */
+#include "curl_printf.h"
+#include "curl_memory.h"
+#include "memdebug.h"
+
+int Curl_socketpair(int domain, int type, int protocol,
+                    curl_socket_t socks[2])
+{
+  union {
+    struct sockaddr_in inaddr;
+    struct sockaddr addr;
+  } a;
+  curl_socket_t listener;
+  curl_socklen_t addrlen = sizeof(a.inaddr);
+  int reuse = 1;
+  char data[2][12];
+  ssize_t dlen;
+  (void)domain;
+  (void)type;
+  (void)protocol;
+
+  listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+  if(listener == CURL_SOCKET_BAD)
+    return -1;
+
+  memset(&a, 0, sizeof(a));
+  a.inaddr.sin_family = AF_INET;
+  a.inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+  a.inaddr.sin_port = 0;
+
+  socks[0] = socks[1] = CURL_SOCKET_BAD;
+
+  if(setsockopt(listener, SOL_SOCKET, SO_REUSEADDR,
+                (char *)&reuse, (curl_socklen_t)sizeof(reuse)) == -1)
+    goto error;
+  if(bind(listener, &a.addr, sizeof(a.inaddr)) == -1)
+    goto error;
+  if(getsockname(listener, &a.addr, &addrlen) == -1)
+    goto error;
+  if(listen(listener, 1) == -1)
+    goto error;
+  socks[0] = socket(AF_INET, SOCK_STREAM, 0);
+  if(socks[0] == CURL_SOCKET_BAD)
+    goto error;
+  if(connect(socks[0], &a.addr, sizeof(a.inaddr)) == -1)
+    goto error;
+  socks[1] = accept(listener, NULL, NULL);
+  if(socks[1] == CURL_SOCKET_BAD)
+    goto error;
+
+  /* verify that nothing else connected */
+  msnprintf(data[0], sizeof(data[0]), "%p", socks);
+  dlen = strlen(data[0]);
+  if(swrite(socks[0], data[0], dlen) != dlen)
+    goto error;
+  if(sread(socks[1], data[1], sizeof(data[1])) != dlen)
+    goto error;
+  if(memcmp(data[0], data[1], dlen))
+    goto error;
+
+  sclose(listener);
+  return 0;
+
+  error:
+  sclose(listener);
+  sclose(socks[0]);
+  sclose(socks[1]);
+  return -1;
+}
+
+#endif /* ! HAVE_SOCKETPAIR */
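
A hypothetical caller-side sketch of how such a pair is meant to be used as
an in-process wakeup pipe (the swrite/sread/sclose wrappers and the AF_UNIX
domain are assumptions; on platforms with a native socketpair() the macro in
socketpair.h maps straight to it, and the fallback above ignores the first
three arguments):

    static int wakeup_roundtrip(void)
    {
      curl_socket_t pair[2];
      char byte = 1;
      if(Curl_socketpair(AF_UNIX, SOCK_STREAM, 0, pair))
        return 1;                      /* could not create the pair */
      (void)swrite(pair[0], &byte, 1); /* signal one end... */
      (void)sread(pair[1], &byte, 1);  /* ...and drain it on the other */
      sclose(pair[0]);
      sclose(pair[1]);
      return 0;
    }
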
diff --git a/lib/strtok.h b/lib/socketpair.h
similarity index 68%
copy from lib/strtok.h
copy to lib/socketpair.h
index 90b831eb6..be9fb24f9 100644
--- a/lib/strtok.h
+++ b/lib/socketpair.h
@@ -1,5 +1,5 @@
-#ifndef HEADER_CURL_STRTOK_H
-#define HEADER_CURL_STRTOK_H
+#ifndef HEADER_CURL_SOCKETPAIR_H
+#define HEADER_CURL_SOCKETPAIR_H
 /***************************************************************************
  *                                  _   _ ____  _
  *  Project                     ___| | | |  _ \| |
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2010, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -21,14 +21,16 @@
  * KIND, either express or implied.
  *
  ***************************************************************************/
-#include "curl_setup.h"
-#include <stddef.h>
 
-#ifndef HAVE_STRTOK_R
-char *Curl_strtok_r(char *s, const char *delim, char **last);
-#define strtok_r Curl_strtok_r
+#include "curl_setup.h"
+#ifndef HAVE_SOCKETPAIR
+int Curl_socketpair(int domain, int type, int protocol,
+                    curl_socket_t socks[2]);
 #else
-#include <string.h>
+#define Curl_socketpair(a,b,c,d) socketpair(a,b,c,d)
 #endif
 
-#endif /* HEADER_CURL_STRTOK_H */
+/* Defined here to allow specific build configs to disable it completely */
+#define USE_SOCKETPAIR 1
+
+#endif /* HEADER_CURL_SOCKETPAIR_H */
diff --git a/lib/socks.c b/lib/socks.c
index d8fcc3bbb..6ae98184d 100644
--- a/lib/socks.c
+++ b/lib/socks.c
@@ -38,7 +38,9 @@
 #include "timeval.h"
 #include "socks.h"
 
-/* The last #include file should be: */
+/* The last 3 #include files should be in this order */
+#include "curl_printf.h"
+#include "curl_memory.h"
 #include "memdebug.h"
 
 /*
@@ -372,8 +374,9 @@ CURLcode Curl_SOCKS5(const char *proxy_user,
     o  REP    Reply field:
     o  X'00' succeeded
   */
-
-  unsigned char socksreq[600]; /* room for large user/pw (255 max each) */
+#define REQUEST_BUFSIZE 600  /* room for large user/pw (255 max each) */
+  unsigned char socksreq[REQUEST_BUFSIZE];
+  char dest[REQUEST_BUFSIZE] = "unknown";  /* printable hostname:port */
   int idx;
   ssize_t actualread;
   ssize_t written;
@@ -605,6 +608,8 @@ CURLcode Curl_SOCKS5(const char *proxy_user,
     socksreq[len++] = (char) hostname_len; /* address length */
     memcpy(&socksreq[len], hostname, hostname_len); /* address str w/o NULL */
     len += hostname_len;
+    msnprintf(dest, sizeof(dest), "%s:%d", hostname, remote_port);
+    infof(data, "SOCKS5 connect to %s (remotely resolved)\n", dest);
   }
   else {
     struct Curl_dns_entry *dns;
@@ -628,8 +633,13 @@ CURLcode Curl_SOCKS5(const char *proxy_user,
     if(dns)
       hp = dns->addr;
     if(hp) {
-      char buf[64];
-      Curl_printable_address(hp, buf, sizeof(buf));
+      if(Curl_printable_address(hp, dest, sizeof(dest))) {
+        size_t destlen = strlen(dest);
+        msnprintf(dest + destlen, sizeof(dest) - destlen, ":%d", remote_port);
+      }
+      else {
+        strcpy(dest, "unknown");
+      }
 
       if(hp->ai_family == AF_INET) {
         int i;
@@ -641,7 +651,7 @@ CURLcode Curl_SOCKS5(const char *proxy_user,
           socksreq[len++] = ((unsigned char *)&saddr_in->sin_addr.s_addr)[i];
         }
 
-        infof(data, "SOCKS5 connect to IPv4 %s (locally resolved)\n", buf);
+        infof(data, "SOCKS5 connect to IPv4 %s (locally resolved)\n", dest);
       }
 #ifdef ENABLE_IPV6
       else if(hp->ai_family == AF_INET6) {
@@ -655,13 +665,13 @@ CURLcode Curl_SOCKS5(const char *proxy_user,
             ((unsigned char *)&saddr_in6->sin6_addr.s6_addr)[i];
         }
 
-        infof(data, "SOCKS5 connect to IPv6 %s (locally resolved)\n", buf);
+        infof(data, "SOCKS5 connect to IPv6 %s (locally resolved)\n", dest);
       }
 #endif
       else {
         hp = NULL; /* fail! */
 
-        failf(data, "SOCKS5 connection to %s not supported\n", buf);
+        failf(data, "SOCKS5 connection to %s not supported\n", dest);
       }
 
       Curl_resolv_unlock(data, dns); /* not used anymore from now on */
@@ -756,42 +766,8 @@ CURLcode Curl_SOCKS5(const char *proxy_user,
 #endif
 
   if(socksreq[1] != 0) { /* Anything besides 0 is an error */
-    if(socksreq[3] == 1) {
-      failf(data,
-            "Can't complete SOCKS5 connection to %d.%d.%d.%d:%d. (%d)",
-            (unsigned char)socksreq[4], (unsigned char)socksreq[5],
-            (unsigned char)socksreq[6], (unsigned char)socksreq[7],
-            (((unsigned char)socksreq[8] << 8) |
-             (unsigned char)socksreq[9]),
-            (unsigned char)socksreq[1]);
-    }
-    else if(socksreq[3] == 3) {
-      unsigned char port_upper = (unsigned char)socksreq[len - 2];
-      socksreq[len - 2] = 0;
-      failf(data,
-            "Can't complete SOCKS5 connection to %s:%d. (%d)",
-            (char *)&socksreq[5],
-            ((port_upper << 8) |
-             (unsigned char)socksreq[len - 1]),
-            (unsigned char)socksreq[1]);
-      socksreq[len - 2] = port_upper;
-    }
-    else if(socksreq[3] == 4) {
-      failf(data,
-            "Can't complete SOCKS5 connection to %02x%02x:%02x%02x:"
-            "%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%02x%02x:%d. (%d)",
-            (unsigned char)socksreq[4], (unsigned char)socksreq[5],
-            (unsigned char)socksreq[6], (unsigned char)socksreq[7],
-            (unsigned char)socksreq[8], (unsigned char)socksreq[9],
-            (unsigned char)socksreq[10], (unsigned char)socksreq[11],
-            (unsigned char)socksreq[12], (unsigned char)socksreq[13],
-            (unsigned char)socksreq[14], (unsigned char)socksreq[15],
-            (unsigned char)socksreq[16], (unsigned char)socksreq[17],
-            (unsigned char)socksreq[18], (unsigned char)socksreq[19],
-            (((unsigned char)socksreq[20] << 8) |
-             (unsigned char)socksreq[21]),
-            (unsigned char)socksreq[1]);
-    }
+    failf(data, "Can't complete SOCKS5 connection to %s. (%d)",
+          dest, (unsigned char)socksreq[1]);
     return CURLE_COULDNT_CONNECT;
   }
   infof(data, "SOCKS5 request granted.\n");
diff --git a/lib/strcase.c b/lib/strcase.c
index c6732ff78..c286df26b 100644
--- a/lib/strcase.c
+++ b/lib/strcase.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -93,6 +93,75 @@ char Curl_raw_toupper(char in)
   return in;
 }
 
+
+/* Portable, consistent tolower (remember EBCDIC). Do not use tolower() because
+   its behavior is altered by the current locale. */
+char Curl_raw_tolower(char in)
+{
+#if !defined(CURL_DOES_CONVERSIONS)
+  if(in >= 'A' && in <= 'Z')
+    return (char)('a' + in - 'A');
+#else
+  switch(in) {
+  case 'A':
+    return 'a';
+  case 'B':
+    return 'b';
+  case 'C':
+    return 'c';
+  case 'D':
+    return 'd';
+  case 'E':
+    return 'e';
+  case 'F':
+    return 'f';
+  case 'G':
+    return 'g';
+  case 'H':
+    return 'h';
+  case 'I':
+    return 'i';
+  case 'J':
+    return 'j';
+  case 'K':
+    return 'k';
+  case 'L':
+    return 'l';
+  case 'M':
+    return 'm';
+  case 'N':
+    return 'n';
+  case 'O':
+    return 'o';
+  case 'P':
+    return 'p';
+  case 'Q':
+    return 'q';
+  case 'R':
+    return 'r';
+  case 'S':
+    return 's';
+  case 'T':
+    return 't';
+  case 'U':
+    return 'u';
+  case 'V':
+    return 'v';
+  case 'W':
+    return 'w';
+  case 'X':
+    return 'x';
+  case 'Y':
+    return 'y';
+  case 'Z':
+    return 'z';
+  }
+#endif
+
+  return in;
+}
+
+
 /*
  * Curl_strcasecompare() is for doing "raw" case insensitive strings. This is
  * meant to be locale independent and only compare strings we know are safe
@@ -165,6 +234,21 @@ void Curl_strntoupper(char *dest, const char *src, size_t n)
   } while(*src++ && --n);
 }
 
+/* Copy a lower case version of the string from src to dest.  The
+ * strings may overlap.  No more than n characters of the string are copied
+ * (including any NUL) and the destination string will NOT be
+ * NUL-terminated if that limit is reached.
+ */
+void Curl_strntolower(char *dest, const char *src, size_t n)
+{
+  if(n < 1)
+    return;
+
+  do {
+    *dest++ = Curl_raw_tolower(*src);
+  } while(*src++ && --n);
+}
+
 /* --- public functions --- */
 
 int curl_strequal(const char *first, const char *second)
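
A short, hypothetical sketch of the in-place use the HTTP/2 code above relies
on (source and destination may overlap, and at most n bytes are written):

    #include <string.h>

    static void lowercase_name(char *name)
    {
      Curl_strntolower(name, name, strlen(name));
      /* "Content-Length" becomes "content-length"; with n == strlen(name)
         the original terminating NUL is left in place */
    }
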
diff --git a/lib/strcase.h b/lib/strcase.h
index 6436e6937..db9a8aff2 100644
--- a/lib/strcase.h
+++ b/lib/strcase.h
@@ -7,7 +7,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -40,11 +40,13 @@ int Curl_safe_strcasecompare(const char *first, const char *second);
 int Curl_strncasecompare(const char *first, const char *second, size_t max);
 
 char Curl_raw_toupper(char in);
+char Curl_raw_tolower(char in);
 
 /* checkprefix() is a shorter version of the above, used when the first
    argument is zero-byte terminated */
 #define checkprefix(a,b)    curl_strnequal(a,b,strlen(a))
 
 void Curl_strntoupper(char *dest, const char *src, size_t n);
+void Curl_strntolower(char *dest, const char *src, size_t n);
 
 #endif /* HEADER_CURL_STRCASE_H */
diff --git a/lib/transfer.c b/lib/transfer.c
index e5e74711d..9996d15ce 100644
--- a/lib/transfer.c
+++ b/lib/transfer.c
@@ -776,14 +776,14 @@ static CURLcode readwrite_data(struct Curl_easy *data,
          * and writes away the data. The returned 'nread' holds the number
          * of actual data it wrote to the client.
          */
-
+        CURLcode extra;
         CHUNKcode res =
-          Curl_httpchunk_read(conn, k->str, nread, &nread);
+          Curl_httpchunk_read(conn, k->str, nread, &nread, &extra);
 
         if(CHUNKE_OK < res) {
-          if(CHUNKE_WRITE_ERROR == res) {
-            failf(data, "Failed writing data");
-            return CURLE_WRITE_ERROR;
+          if(CHUNKE_PASSTHRU_ERROR == res) {
+            failf(data, "Failed reading the chunked-encoded stream");
+            return extra;
           }
           failf(data, "%s in chunked-encoding", Curl_chunked_strerror(res));
           return CURLE_RECV_ERROR;
@@ -1510,6 +1510,7 @@ CURLcode Curl_pretransfer(struct Curl_easy *data)
       }
     }
 #endif
+    Curl_http2_init_state(&data->state);
   }
 
   return result;
@@ -1591,7 +1592,8 @@ CURLcode Curl_follow(struct Curl_easy *data,
 
   DEBUGASSERT(data->state.uh);
   uc = curl_url_set(data->state.uh, CURLUPART_URL, newurl,
-                    (type == FOLLOW_FAKE) ? CURLU_NON_SUPPORT_SCHEME : 0);
+                    (type == FOLLOW_FAKE) ? CURLU_NON_SUPPORT_SCHEME :
+                    ((type == FOLLOW_REDIR) ? CURLU_URLENCODE : 0) );
   if(uc) {
     if(type != FOLLOW_FAKE)
       return Curl_uc_to_curlcode(uc);
diff --git a/lib/url.c b/lib/url.c
index b7cf7bedd..8285474fd 100644
--- a/lib/url.c
+++ b/lib/url.c
@@ -317,13 +317,17 @@ static void up_free(struct Curl_easy *data)
  * when curl_easy_perform() is invoked.
  */
 
-CURLcode Curl_close(struct Curl_easy *data)
+CURLcode Curl_close(struct Curl_easy **datap)
 {
   struct Curl_multi *m;
+  struct Curl_easy *data;
 
-  if(!data)
+  if(!datap || !*datap)
     return CURLE_OK;
 
+  data = *datap;
+  *datap = NULL;
+
   Curl_expire_clear(data); /* shut off timers */
 
   m = data->multi;
@@ -374,7 +378,7 @@ CURLcode Curl_close(struct Curl_easy *data)
   Curl_safefree(data->state.buffer);
   Curl_safefree(data->state.headerbuff);
   Curl_safefree(data->state.ulbuf);
-  Curl_flush_cookies(data, 1);
+  Curl_flush_cookies(data, TRUE);
 #ifdef USE_ALTSVC
   Curl_altsvc_save(data->asi, data->set.str[STRING_ALTSVC]);
   Curl_altsvc_cleanup(data->asi);
@@ -399,6 +403,10 @@ CURLcode Curl_close(struct Curl_easy *data)
     Curl_share_unlock(data, CURL_LOCK_DATA_SHARE);
   }
 
+  free(data->req.doh.probe[0].serverdoh.memory);
+  free(data->req.doh.probe[1].serverdoh.memory);
+  curl_slist_free_all(data->req.doh.headers);
+
   /* destruct wildcard structures if it is needed */
   Curl_wildcard_dtor(&data->wildcard);
   Curl_freeset(data);
@@ -612,8 +620,6 @@ CURLcode Curl_open(struct Curl_easy **curl)
 
       data->progress.flags |= PGRS_HIDE;
       data->state.current_speed = -1; /* init to negative == impossible */
-
-      Curl_http2_init_state(&data->state);
     }
   }
 
@@ -1041,7 +1047,7 @@ ConnectionExists(struct Curl_easy *data,
     /* We can't multiplex if we don't know anything about the server */
     if(canmultiplex) {
       if(bundle->multiuse == BUNDLE_UNKNOWN) {
-        if((bundle->multiuse == BUNDLE_UNKNOWN) && data->set.pipewait) {
+        if(data->set.pipewait) {
           infof(data, "Server doesn't support multiplex yet, wait\n");
           *waitpipe = TRUE;
           Curl_conncache_unlock(data);
@@ -1277,8 +1283,14 @@ ConnectionExists(struct Curl_easy *data,
            partway through a handshake!) */
         if(wantNTLMhttp) {
           if(strcmp(needle->user, check->user) ||
-             strcmp(needle->passwd, check->passwd))
+             strcmp(needle->passwd, check->passwd)) {
+
+            /* we prefer a credential match, but this is at least a connection
+               that can be reused and "upgraded" to NTLM */
+            if(check->http_ntlm_state == NTLMSTATE_NONE)
+              chosen = check;
             continue;
+          }
         }
         else if(check->http_ntlm_state != NTLMSTATE_NONE) {
           /* Connection is using NTLM auth but we don't want NTLM */
@@ -1787,6 +1799,7 @@ static CURLcode parseurlandfillconn(struct Curl_easy *data,
   }
 
   if(!data->set.uh) {
+    char *newurl;
     uc = curl_url_set(uh, CURLUPART_URL, data->change.url,
                     CURLU_GUESS_SCHEME |
                     CURLU_NON_SUPPORT_SCHEME |
@@ -1797,6 +1810,15 @@ static CURLcode parseurlandfillconn(struct Curl_easy *data,
       DEBUGF(infof(data, "curl_url_set rejected %s\n", data->change.url));
       return Curl_uc_to_curlcode(uc);
     }
+
+    /* after it was parsed, get the generated normalized version */
+    uc = curl_url_get(uh, CURLUPART_URL, &newurl, 0);
+    if(uc)
+      return Curl_uc_to_curlcode(uc);
+    if(data->change.url_alloc)
+      free(data->change.url);
+    data->change.url = newurl;
+    data->change.url_alloc = TRUE;
   }
 
   uc = curl_url_get(uh, CURLUPART_SCHEME, &data->state.up.scheme, 0);
@@ -1863,11 +1885,7 @@ static CURLcode parseurlandfillconn(struct Curl_easy *data,
   (void)curl_url_get(uh, CURLUPART_QUERY, &data->state.up.query, 0);
 
   hostname = data->state.up.hostname;
-  if(!hostname)
-    /* this is for file:// transfers, get a dummy made */
-    hostname = (char *)"";
-
-  if(hostname[0] == '[') {
+  if(hostname && hostname[0] == '[') {
     /* This looks like an IPv6 address literal. See if there is an address
        scope. */
     size_t hlen;
@@ -1881,7 +1899,7 @@ static CURLcode parseurlandfillconn(struct Curl_easy *data,
   }
 
   /* make sure the connect struct gets its own copy of the host name */
-  conn->host.rawalloc = strdup(hostname);
+  conn->host.rawalloc = strdup(hostname ? hostname : "");
   if(!conn->host.rawalloc)
     return CURLE_OUT_OF_MEMORY;
   conn->host.name = conn->host.rawalloc;
@@ -1969,6 +1987,8 @@ void Curl_free_request_state(struct Curl_easy *data)
 {
   Curl_safefree(data->req.protop);
   Curl_safefree(data->req.newurl);
+  Curl_close(&data->req.doh.probe[0].easy);
+  Curl_close(&data->req.doh.probe[1].easy);
 }
 
 
@@ -2754,13 +2774,6 @@ static CURLcode set_login(struct connectdata *conn)
       result = CURLE_OUT_OF_MEMORY;
   }
 
-  /* if there's a user without password, consider password blank */
-  if(conn->user && !conn->passwd) {
-    conn->passwd = strdup("");
-    if(!conn->passwd)
-      result = CURLE_OUT_OF_MEMORY;
-  }
-
   return result;
 }
 
@@ -3519,6 +3532,10 @@ static CURLcode create_conn(struct Curl_easy *data,
     data->set.str[STRING_SSL_CIPHER13_LIST_ORIG];
   data->set.proxy_ssl.primary.cipher_list13 =
     data->set.str[STRING_SSL_CIPHER13_LIST_PROXY];
+  data->set.ssl.primary.pinned_key =
+    data->set.str[STRING_SSL_PINNEDPUBLICKEY_ORIG];
+  data->set.proxy_ssl.primary.pinned_key =
+    data->set.str[STRING_SSL_PINNEDPUBLICKEY_PROXY];
 
   data->set.ssl.CRLfile = data->set.str[STRING_SSL_CRLFILE_ORIG];
   data->set.proxy_ssl.CRLfile = data->set.str[STRING_SSL_CRLFILE_PROXY];
@@ -3815,7 +3832,9 @@ CURLcode Curl_setup_conn(struct connectdata *conn,
   }
   else {
     Curl_pgrsTime(data, TIMER_CONNECT);    /* we're connected already */
-    Curl_pgrsTime(data, TIMER_APPCONNECT); /* we're connected already */
+    if(conn->ssl[FIRSTSOCKET].use ||
+       (conn->handler->protocol & PROTO_FAMILY_SSH))
+      Curl_pgrsTime(data, TIMER_APPCONNECT); /* we're connected already */
     conn->bits.tcpconnect[FIRSTSOCKET] = TRUE;
     *protocol_done = TRUE;
     Curl_updateconninfo(conn, conn->sock[FIRSTSOCKET]);
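
Passing the handle by address lets Curl_close() clear the caller's pointer,
which is what makes the repeated probe-handle cleanup elsewhere in this patch
safe. A hypothetical caller sketch:

    static void drop_probe(struct Curl_easy **probe)
    {
      Curl_close(probe);  /* frees the handle (if any) and NULLs *probe */
      Curl_close(probe);  /* now a harmless no-op instead of a double free */
    }
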
diff --git a/lib/url.h b/lib/url.h
index f4d611add..053fbdffc 100644
--- a/lib/url.h
+++ b/lib/url.h
@@ -49,7 +49,7 @@ CURLcode Curl_init_userdefined(struct Curl_easy *data);
 
 void Curl_freeset(struct Curl_easy * data);
 CURLcode Curl_uc_to_curlcode(CURLUcode uc);
-CURLcode Curl_close(struct Curl_easy *data); /* opposite of curl_open() */
+CURLcode Curl_close(struct Curl_easy **datap); /* opposite of curl_open() */
 CURLcode Curl_connect(struct Curl_easy *, bool *async, bool *protocol_connect);
 CURLcode Curl_disconnect(struct Curl_easy *data,
                          struct connectdata *, bool dead_connection);
diff --git a/lib/urlapi.c b/lib/urlapi.c
index a0ee331da..fa514bce5 100644
--- a/lib/urlapi.c
+++ b/lib/urlapi.c
@@ -64,6 +64,7 @@ struct Curl_URL {
   char *fragment;
 
   char *scratch; /* temporary scratch area */
+  char *temppath; /* temporary path pointer */
   long portnum; /* the numerical version */
 };
 
@@ -82,6 +83,7 @@ static void free_urlhandle(struct Curl_URL *u)
   free(u->query);
   free(u->fragment);
   free(u->scratch);
+  free(u->temppath);
 }
 
 /* move the full contents of one handle onto another and
@@ -351,7 +353,7 @@ static char *concat_url(const char *base, const char *relurl)
   else {
     /* We got a new absolute path for this server */
 
-    if((relurl[0] == '/') && (relurl[1] == '/')) {
+    if(relurl[1] == '/') {
       /* the new URL starts with //, just keep the protocol part from the
          original one */
       *protsep = 0;
@@ -596,8 +598,12 @@ static CURLUcode hostname_check(struct Curl_URL *u, char *hostname)
   size_t hlen = strlen(hostname);
 
   if(hostname[0] == '[') {
+#ifdef ENABLE_IPV6
     char dest[16]; /* fits a binary IPv6 address */
+#endif
     const char *l = "0123456789abcdefABCDEF:.";
+    if(hlen < 5) /* '[::1]' is the shortest possible valid string */
+      return CURLUE_MALFORMED_INPUT;
     hostname++;
     hlen -= 2;
 
@@ -784,6 +790,7 @@ static CURLUcode seturl(const char *url, CURLU *u, unsigned int flags)
 
       if(junkscan(schemep))
         return CURLUE_MALFORMED_INPUT;
+
     }
     else {
       /* no scheme! */
@@ -804,11 +811,14 @@ static CURLUcode seturl(const char *url, CURLU *u, unsigned int flags)
       p++;
 
     len = p - hostp;
-    if(!len)
-      return CURLUE_MALFORMED_INPUT;
-
-    memcpy(hostname, hostp, len);
-    hostname[len] = 0;
+    if(len) {
+      memcpy(hostname, hostp, len);
+      hostname[len] = 0;
+    }
+    else {
+      if(!(flags & CURLU_NO_AUTHORITY))
+        return CURLUE_MALFORMED_INPUT;
+    }
 
     if((flags & CURLU_GUESS_SCHEME) && !schemep) {
       /* legacy curl-style guess based on host name */
@@ -843,35 +853,60 @@ static CURLUcode seturl(const char *url, CURLU *u, unsigned int flags)
   if(junkscan(path))
     return CURLUE_MALFORMED_INPUT;
 
-  query = strchr(path, '?');
-  if(query)
-    *query++ = 0;
+  if((flags & CURLU_URLENCODE) && path[0]) {
+    /* worst case output length is 3x the original! */
+    char *newp = malloc(strlen(path) * 3);
+    if(!newp)
+      return CURLUE_OUT_OF_MEMORY;
+    path_alloced = TRUE;
+    strcpy_url(newp, path, TRUE); /* consider it relative */
+    u->temppath = path = newp;
+  }
 
-  fragment = strchr(query?query:path, '#');
-  if(fragment)
+  fragment = strchr(path, '#');
+  if(fragment) {
     *fragment++ = 0;
+    if(fragment[0]) {
+      u->fragment = strdup(fragment);
+      if(!u->fragment)
+        return CURLUE_OUT_OF_MEMORY;
+    }
+  }
+
+  query = strchr(path, '?');
+  if(query) {
+    *query++ = 0;
+    /* done even if the query part is a blank string */
+    u->query = strdup(query);
+    if(!u->query)
+      return CURLUE_OUT_OF_MEMORY;
+  }
 
   if(!path[0])
-    /* if there's no path set, unset */
+    /* if there's no path left set, unset */
     path = NULL;
-  else if(!(flags & CURLU_PATH_AS_IS)) {
-    /* sanitise paths and remove ../ and ./ sequences according to RFC3986 */
-    char *newp = Curl_dedotdotify(path);
-    if(!newp)
-      return CURLUE_OUT_OF_MEMORY;
+  else {
+    if(!(flags & CURLU_PATH_AS_IS)) {
+      /* remove ../ and ./ sequences according to RFC3986 */
+      char *newp = Curl_dedotdotify(path);
+      if(!newp)
+        return CURLUE_OUT_OF_MEMORY;
 
-    if(strcmp(newp, path)) {
-      /* if we got a new version */
-      path = newp;
-      path_alloced = TRUE;
+      if(strcmp(newp, path)) {
+        /* if we got a new version */
+        if(path_alloced)
+          Curl_safefree(u->temppath);
+        u->temppath = path = newp;
+        path_alloced = TRUE;
+      }
+      else
+        free(newp);
     }
-    else
-      free(newp);
-  }
-  if(path) {
+
     u->path = path_alloced?path:strdup(path);
     if(!u->path)
       return CURLUE_OUT_OF_MEMORY;
+    u->temppath = NULL; /* used now */
   }
 
   if(hostname) {
@@ -889,28 +924,22 @@ static CURLUcode seturl(const char *url, CURLU *u, unsigned int flags)
     if(result)
       return result;
 
-    result = hostname_check(u, hostname);
-    if(result)
-      return result;
+    if(0 == strlen(hostname) && (flags & CURLU_NO_AUTHORITY)) {
+      /* Skip hostname check, it's allowed to be empty. */
+    }
+    else {
+      result = hostname_check(u, hostname);
+      if(result)
+        return result;
+    }
 
     u->host = strdup(hostname);
     if(!u->host)
       return CURLUE_OUT_OF_MEMORY;
   }
 
-  if(query) {
-    u->query = strdup(query);
-    if(!u->query)
-      return CURLUE_OUT_OF_MEMORY;
-  }
-  if(fragment && fragment[0]) {
-    u->fragment = strdup(fragment);
-    if(!u->fragment)
-      return CURLUE_OUT_OF_MEMORY;
-  }
-
-  free(u->scratch);
-  u->scratch = NULL;
+  Curl_safefree(u->scratch);
+  Curl_safefree(u->temppath);
 
   return CURLUE_OK;
 }
@@ -1075,24 +1104,23 @@ CURLUcode curl_url_get(CURLU *u, CURLUPart what,
       else
         return CURLUE_NO_SCHEME;
 
-      if(scheme) {
-        h = Curl_builtin_scheme(scheme);
-        if(!port && (flags & CURLU_DEFAULT_PORT)) {
-          /* there's no stored port number, but asked to deliver
-             a default one for the scheme */
-          if(h) {
-            msnprintf(portbuf, sizeof(portbuf), "%ld", h->defport);
-            port = portbuf;
-          }
-        }
-        else if(port) {
-          /* there is a stored port number, but asked to inhibit if it matches
-             the default one for the scheme */
-          if(h && (h->defport == u->portnum) &&
-             (flags & CURLU_NO_DEFAULT_PORT))
-            port = NULL;
+      h = Curl_builtin_scheme(scheme);
+      if(!port && (flags & CURLU_DEFAULT_PORT)) {
+        /* there's no stored port number, but asked to deliver
+           a default one for the scheme */
+        if(h) {
+          msnprintf(portbuf, sizeof(portbuf), "%ld", h->defport);
+          port = portbuf;
         }
       }
+      else if(port) {
+        /* there is a stored port number, but asked to inhibit if it matches
+           the default one for the scheme */
+        if(h && (h->defport == u->portnum) &&
+           (flags & CURLU_NO_DEFAULT_PORT))
+          port = NULL;
+      }
+
       if(h && !(h->flags & PROTOPT_URLOPTIONS))
         options = NULL;
 
@@ -1340,7 +1368,8 @@ CURLUcode curl_url_set(CURLU *u, CURLUPart what,
   default:
     return CURLUE_UNKNOWN_PART;
   }
-  if(storep) {
+  DEBUGASSERT(storep);
+  {
     const char *newp = part;
     size_t nalloc = strlen(part);
 
@@ -1432,9 +1461,14 @@ CURLUcode curl_url_set(CURLU *u, CURLUPart what,
     }
 
     if(what == CURLUPART_HOST) {
-      if(hostname_check(u, (char *)newp)) {
-        free((char *)newp);
-        return CURLUE_MALFORMED_INPUT;
+      if(0 == strlen(newp) && (flags & CURLU_NO_AUTHORITY)) {
+        /* Skip hostname check, it's allowed to be empty. */
+      }
+      else {
+        if(hostname_check(u, (char *)newp)) {
+          free((char *)newp);
+          return CURLUE_MALFORMED_INPUT;
+        }
       }
     }
 
diff --git a/lib/urldata.h b/lib/urldata.h
index 2c7c1fb4e..b6f59313d 100644
--- a/lib/urldata.h
+++ b/lib/urldata.h
@@ -68,6 +68,7 @@
 #define PROTO_FAMILY_POP3 (CURLPROTO_POP3|CURLPROTO_POP3S)
 #define PROTO_FAMILY_SMB  (CURLPROTO_SMB|CURLPROTO_SMBS)
 #define PROTO_FAMILY_SMTP (CURLPROTO_SMTP|CURLPROTO_SMTPS)
+#define PROTO_FAMILY_SSH  (CURLPROTO_SCP|CURLPROTO_SFTP)
 
 #define DEFAULT_CONNCACHE_SIZE 5
 
@@ -158,7 +159,13 @@ typedef ssize_t (Curl_recv)(struct connectdata *conn, /* connection data */
   ((x) && ((x)->magic == CURLEASY_MAGIC_NUMBER))
 
 /* the type we use for storing a single boolean bit */
+#ifdef _MSC_VER
+typedef bool bit;
+#define BIT(x) bool x
+#else
 typedef unsigned int bit;
+#define BIT(x) bit x:1
+#endif
 
 #ifdef HAVE_GSSAPI
 /* Types needed for krb5-ftp connections */
@@ -166,7 +173,7 @@ struct krb5buffer {
   void *data;
   size_t size;
   size_t index;
-  bit eof_flag:1;
+  BIT(eof_flag);
 };
 
 enum protection_level {
@@ -210,7 +217,7 @@ struct ssl_connect_data {
 #if defined(USE_SSL)
   struct ssl_backend_data *backend;
 #endif
-  bit use:1;
+  BIT(use);
 };
 
 struct ssl_primary_config {
@@ -223,10 +230,11 @@ struct ssl_primary_config {
   char *egdsocket;       /* path to file containing the EGD daemon socket */
   char *cipher_list;     /* list of ciphers to use */
   char *cipher_list13;   /* list of TLS 1.3 cipher suites to use */
-  bit verifypeer:1;      /* set TRUE if this is desired */
-  bit verifyhost:1;      /* set TRUE if CN/SAN must match hostname */
-  bit verifystatus:1;    /* set TRUE if certificate status must be checked */
-  bit sessionid:1;       /* cache session IDs or not */
+  char *pinned_key;
+  BIT(verifypeer);       /* set TRUE if this is desired */
+  BIT(verifyhost);       /* set TRUE if CN/SAN must match hostname */
+  BIT(verifystatus);     /* set TRUE if certificate status must be checked */
+  BIT(sessionid);        /* cache session IDs or not */
 };
 
 struct ssl_config_data {
@@ -246,10 +254,10 @@ struct ssl_config_data {
   char *password; /* TLS password (for, e.g., SRP) */
   enum CURL_TLSAUTH authtype; /* TLS authentication type (default SRP) */
 #endif
-  bit certinfo:1;     /* gather lots of certificate info */
-  bit falsestart:1;
-  bit enable_beast:1; /* allow this flaw for interoperability's sake*/
-  bit no_revoke:1;    /* disable SSL certificate revocation checks */
+  BIT(certinfo);     /* gather lots of certificate info */
+  BIT(falsestart);
+  BIT(enable_beast); /* allow this flaw for interoperability's sake*/
+  BIT(no_revoke);    /* disable SSL certificate revocation checks */
 };
 
 struct ssl_general_config {
@@ -292,8 +300,8 @@ struct digestdata {
   char *qop;
   char *algorithm;
   int nc; /* nounce count */
-  bit stale:1; /* set true for re-negotiation */
-  bit userhash:1;
+  BIT(stale); /* set true for re-negotiation */
+  BIT(userhash);
 #endif
 };
 
@@ -387,10 +395,10 @@ struct negotiatedata {
   size_t output_token_length;
 #endif
 #endif
-  bool noauthpersist;
-  bool havenoauthpersist;
-  bool havenegdata;
-  bool havemultiplerequests;
+  BIT(noauthpersist);
+  BIT(havenoauthpersist);
+  BIT(havenegdata);
+  BIT(havemultiplerequests);
 };
 #endif
 
@@ -404,64 +412,64 @@ struct ConnectBits {
                                   is complete */
   bool tcpconnect[2]; /* the TCP layer (or similar) is connected, this is set
                          the first time on the first connect function call */
-  bit close:1; /* if set, we close the connection after this request */
-  bit reuse:1; /* if set, this is a re-used connection */
-  bit altused:1; /* this is an alt-svc "redirect" */
-  bit conn_to_host:1; /* if set, this connection has a "connect to host"
-                         that overrides the host in the URL */
-  bit conn_to_port:1; /* if set, this connection has a "connect to port"
-                         that overrides the port in the URL (remote port) */
-  bit proxy:1; /* if set, this transfer is done through a proxy - any type */
-  bit httpproxy:1;  /* if set, this transfer is done through a http proxy */
-  bit socksproxy:1; /* if set, this transfer is done through a socks proxy */
-  bit user_passwd:1; /* do we use user+password for this connection? */
-  bit proxy_user_passwd:1; /* user+password for the proxy? */
-  bit ipv6_ip:1; /* we communicate with a remote site specified with pure IPv6
-                    IP address */
-  bit ipv6:1;    /* we communicate with a site using an IPv6 address */
-  bit do_more:1; /* this is set TRUE if the ->curl_do_more() function is
-                    supposed to be called, after ->curl_do() */
-  bit protoconnstart:1;/* the protocol layer has STARTED its operation after
-                          the TCP layer connect */
-  bit retry:1;         /* this connection is about to get closed and then
-                          re-attempted at another connection. */
-  bit tunnel_proxy:1;  /* if CONNECT is used to "tunnel" through the proxy.
-                          This is implicit when SSL-protocols are used through
-                          proxies, but can also be enabled explicitly by
-                          apps */
-  bit authneg:1;       /* TRUE when the auth phase has started, which means
-                          that we are creating a request with an auth header,
-                          but it is not the final request in the auth
-                          negotiation. */
-  bit rewindaftersend:1;/* TRUE when the sending couldn't be stopped even
-                           though it will be discarded. When the whole send
-                           operation is done, we must call the data rewind
-                           callback. */
+  BIT(close); /* if set, we close the connection after this request */
+  BIT(reuse); /* if set, this is a re-used connection */
+  BIT(altused); /* this is an alt-svc "redirect" */
+  BIT(conn_to_host); /* if set, this connection has a "connect to host"
+                        that overrides the host in the URL */
+  BIT(conn_to_port); /* if set, this connection has a "connect to port"
+                        that overrides the port in the URL (remote port) */
+  BIT(proxy); /* if set, this transfer is done through a proxy - any type */
+  BIT(httpproxy);  /* if set, this transfer is done through an HTTP proxy */
+  BIT(socksproxy); /* if set, this transfer is done through a socks proxy */
+  BIT(user_passwd); /* do we use user+password for this connection? */
+  BIT(proxy_user_passwd); /* user+password for the proxy? */
+  BIT(ipv6_ip); /* we communicate with a remote site specified with pure IPv6
+                   IP address */
+  BIT(ipv6);    /* we communicate with a site using an IPv6 address */
+  BIT(do_more); /* this is set TRUE if the ->curl_do_more() function is
+                   supposed to be called, after ->curl_do() */
+  BIT(protoconnstart);/* the protocol layer has STARTED its operation after
+                         the TCP layer connect */
+  BIT(retry);         /* this connection is about to get closed and then
+                         re-attempted at another connection. */
+  BIT(tunnel_proxy);  /* if CONNECT is used to "tunnel" through the proxy.
+                         This is implicit when SSL-protocols are used through
+                         proxies, but can also be enabled explicitly by
+                         apps */
+  BIT(authneg);       /* TRUE when the auth phase has started, which means
+                         that we are creating a request with an auth header,
+                         but it is not the final request in the auth
+                         negotiation. */
+  BIT(rewindaftersend);/* TRUE when the sending couldn't be stopped even
+                          though it will be discarded. When the whole send
+                          operation is done, we must call the data rewind
+                          callback. */
 #ifndef CURL_DISABLE_FTP
-  bit ftp_use_epsv:1;  /* As set with CURLOPT_FTP_USE_EPSV, but if we find out
-                          EPSV doesn't work we disable it for the forthcoming
-                          requests */
-  bit ftp_use_eprt:1;  /* As set with CURLOPT_FTP_USE_EPRT, but if we find out
-                          EPRT doesn't work we disable it for the forthcoming
-                          requests */
-  bit ftp_use_data_ssl:1; /* Enabled SSL for the data connection */
+  BIT(ftp_use_epsv);  /* As set with CURLOPT_FTP_USE_EPSV, but if we find out
+                         EPSV doesn't work we disable it for the forthcoming
+                         requests */
+  BIT(ftp_use_eprt);  /* As set with CURLOPT_FTP_USE_EPRT, but if we find out
+                         EPRT doesn't work we disable it for the forthcoming
+                         requests */
+  BIT(ftp_use_data_ssl); /* Enabled SSL for the data connection */
 #endif
-  bit netrc:1;         /* name+password provided by netrc */
-  bit userpwd_in_url:1; /* name+password found in url */
-  bit stream_was_rewound:1; /* The stream was rewound after a request read
-                               past the end of its response byte boundary */
-  bit proxy_connect_closed:1; /* TRUE if a proxy disconnected the connection
-                                 in a CONNECT request with auth, so that
-                                 libcurl should reconnect and continue. */
-  bit bound:1; /* set true if bind() has already been done on this socket/
-                  connection */
-  bit type_set:1;  /* type= was used in the URL */
-  bit multiplex:1; /* connection is multiplexed */
-  bit tcp_fastopen:1; /* use TCP Fast Open */
-  bit tls_enable_npn:1;  /* TLS NPN extension? */
-  bit tls_enable_alpn:1; /* TLS ALPN extension? */
-  bit socksproxy_connecting:1; /* connecting through a socks proxy */
-  bit connect_only:1;
+  BIT(netrc);         /* name+password provided by netrc */
+  BIT(userpwd_in_url); /* name+password found in url */
+  BIT(stream_was_rewound); /* The stream was rewound after a request read
+                              past the end of its response byte boundary */
+  BIT(proxy_connect_closed); /* TRUE if a proxy disconnected the connection
+                                in a CONNECT request with auth, so that
+                                libcurl should reconnect and continue. */
+  BIT(bound); /* set true if bind() has already been done on this socket/
+                 connection */
+  BIT(type_set);  /* type= was used in the URL */
+  BIT(multiplex); /* connection is multiplexed */
+  BIT(tcp_fastopen); /* use TCP Fast Open */
+  BIT(tls_enable_npn);  /* TLS NPN extension? */
+  BIT(tls_enable_alpn); /* TLS ALPN extension? */
+  BIT(socksproxy_connecting); /* connecting through a socks proxy */
+  BIT(connect_only);
 };
 
 struct hostname {
@@ -494,7 +502,7 @@ struct Curl_async {
   struct Curl_dns_entry *dns;
   int status; /* if done is TRUE, this is the status from the callback */
   void *os_specific;  /* 'struct thread_data' for Windows */
-  bit done:1;  /* set TRUE when the lookup is complete */
+  BIT(done);  /* set TRUE when the lookup is complete */
 };
 
 #define FIRSTSOCKET     0
@@ -615,20 +623,20 @@ struct SingleRequest {
 #ifndef CURL_DISABLE_DOH
   struct dohdata doh; /* DoH specific data for this request */
 #endif
-  bit header:1;       /* incoming data has HTTP header */
-  bit content_range:1; /* set TRUE if Content-Range: was found */
-  bit upload_done:1;  /* set to TRUE when doing chunked transfer-encoding
-                         upload and we're uploading the last chunk */
-  bit ignorebody:1;   /* we read a response-body but we ignore it! */
-  bit http_bodyless:1; /* HTTP response status code is between 100 and 199,
-                          204 or 304 */
-  bit chunk:1; /* if set, this is a chunked transfer-encoding */
-  bit upload_chunky:1; /* set TRUE if we are doing chunked transfer-encoding
-                          on upload */
-  bit getheader:1;    /* TRUE if header parsing is wanted */
-  bit forbidchunk:1;  /* used only to explicitly forbid chunk-upload for
-                         specific upload buffers. See readmoredata() in http.c
-                         for details. */
+  BIT(header);       /* incoming data has HTTP header */
+  BIT(content_range); /* set TRUE if Content-Range: was found */
+  BIT(upload_done);  /* set to TRUE when doing chunked transfer-encoding
+                        upload and we're uploading the last chunk */
+  BIT(ignorebody);   /* we read a response-body but we ignore it! */
+  BIT(http_bodyless); /* HTTP response status code is between 100 and 199,
+                         204 or 304 */
+  BIT(chunk); /* if set, this is a chunked transfer-encoding */
+  BIT(upload_chunky); /* set TRUE if we are doing chunked transfer-encoding
+                         on upload */
+  BIT(getheader);    /* TRUE if header parsing is wanted */
+  BIT(forbidchunk);  /* used only to explicitly forbid chunk-upload for
+                        specific upload buffers. See readmoredata() in http.c
+                        for details. */
 };
 
 /*
@@ -777,8 +785,8 @@ struct http_connect_state {
     TUNNEL_CONNECT, /* CONNECT has been sent off */
     TUNNEL_COMPLETE /* CONNECT response received completely */
   } tunnel_state;
-  bit chunked_encoding:1;
-  bit close_connection:1;
+  BIT(chunked_encoding);
+  BIT(close_connection);
 };
 
 struct ldapconninfo;
@@ -953,7 +961,7 @@ struct connectdata {
   } allocptr;
 
 #ifdef HAVE_GSSAPI
-  bit sec_complete:1; /* if Kerberos is enabled for this connection */
+  BIT(sec_complete); /* if Kerberos is enabled for this connection */
   enum protection_level command_prot;
   enum protection_level data_prot;
   enum protection_level request_data_prot;
@@ -1046,16 +1054,16 @@ struct connectdata {
 
 #ifdef USE_UNIX_SOCKETS
   char *unix_domain_socket;
-  bit abstract_unix_socket:1;
+  BIT(abstract_unix_socket);
 #endif
-  bit tls_upgraded:1;
+  BIT(tls_upgraded);
   /* the two following *_inuse fields are only flags, not counters in any way.
      If TRUE it means the channel is in use, and if FALSE it means the channel
      is up for grabs by one. */
-  bit readchannel_inuse:1;  /* whether the read channel is in use by an easy
-                               handle */
-  bit writechannel_inuse:1; /* whether the write channel is in use by an easy
-                               handle */
+  BIT(readchannel_inuse);  /* whether the read channel is in use by an easy
+                              handle */
+  BIT(writechannel_inuse); /* whether the write channel is in use by an easy
+                              handle */
 };
 
 /* The end of connectdata. */
@@ -1097,8 +1105,8 @@ struct PureInfo {
                                  OpenSSL, GnuTLS, Schannel, NSS and GSKit
                                  builds. Asked for with CURLOPT_CERTINFO
                                  / CURLINFO_CERTINFO */
-  bit timecond:1;  /* set to TRUE if the time condition didn't match, which
-                      thus made the document NOT get fetched */
+  BIT(timecond);  /* set to TRUE if the time condition didn't match, which
+                     thus made the document NOT get fetched */
 };
 
 
@@ -1145,8 +1153,8 @@ struct Progress {
   curl_off_t speeder[ CURR_TIME ];
   struct curltime speeder_time[ CURR_TIME ];
   int speeder_c;
-  bit callback:1;  /* set when progress callback is used */
-  bit is_t_startransfer_set:1;
+  BIT(callback);  /* set when progress callback is used */
+  BIT(is_t_startransfer_set);
 };
 
 typedef enum {
@@ -1194,12 +1202,12 @@ struct auth {
   unsigned long picked;
   unsigned long avail; /* Bitmask for what the server reports to support for
                           this resource */
-  bit done:1;  /* TRUE when the auth phase is done and ready to do the
-                 *actual* request */
-  bit multipass:1; /* TRUE if this is not yet authenticated but within the
-                       auth multipass negotiation */
-  bit iestyle:1; /* TRUE if digest should be done IE-style or FALSE if it
-                     should be RFC compliant */
+  BIT(done);  /* TRUE when the auth phase is done and ready to do the
+                 actual request */
+  BIT(multipass); /* TRUE if this is not yet authenticated but within the
+                     auth multipass negotiation */
+  BIT(iestyle); /* TRUE if digest should be done IE-style or FALSE if it
+                   should be RFC compliant */
 };
 
 struct Curl_http2_dep {
@@ -1329,7 +1337,7 @@ struct UrlState {
 /* do FTP line-end conversions on most platforms */
 #define CURL_DO_LINEEND_CONV
   /* for FTP downloads: track CRLF sequences that span blocks */
-  bit prev_block_had_trailing_cr:1;
+  BIT(prev_block_had_trailing_cr);
   /* for FTP downloads: how many CRLFs did we convert to LFs? */
   curl_off_t crlf_conversions;
 #endif
@@ -1364,32 +1372,33 @@ struct UrlState {
   trailers_state trailers_state; /* whether we are sending trailers
                                        and what stage are we at */
 #ifdef CURLDEBUG
-  bit conncache_lock:1;
+  BIT(conncache_lock);
 #endif
   /* when curl_easy_perform() is called, the multi handle is "owned" by
      the easy handle so curl_easy_cleanup() on such an easy handle will
      also close the multi handle! */
-  bit multi_owned_by_easy:1;
+  BIT(multi_owned_by_easy);
 
-  bit this_is_a_follow:1; /* this is a followed Location: request */
-  bit refused_stream:1; /* this was refused, try again */
-  bit errorbuf:1; /* Set to TRUE if the error buffer is already filled in.
+  BIT(this_is_a_follow); /* this is a followed Location: request */
+  BIT(refused_stream); /* this was refused, try again */
+  BIT(errorbuf); /* Set to TRUE if the error buffer is already filled in.
                     This must be set to FALSE every time _easy_perform() is
                     called. */
-  bit allow_port:1; /* Is set.use_port allowed to take effect or not. This
+  BIT(allow_port); /* Is set.use_port allowed to take effect or not. This
                       is always set TRUE when curl_easy_perform() is called. */
-  bit authproblem:1; /* TRUE if there's some problem authenticating */
+  BIT(authproblem); /* TRUE if there's some problem authenticating */
   /* set after initial USER failure, to prevent an authentication loop */
-  bit ftp_trying_alternative:1;
-  bit wildcardmatch:1; /* enable wildcard matching */
-  bit expect100header:1;  /* TRUE if we added Expect: 100-continue */
-  bit use_range:1;
-  bit rangestringalloc:1; /* the range string is malloc()'ed */
-  bit done:1; /* set to FALSE when Curl_init_do() is called and set to TRUE
+  BIT(ftp_trying_alternative);
+  BIT(wildcardmatch); /* enable wildcard matching */
+  BIT(expect100header);  /* TRUE if we added Expect: 100-continue */
+  BIT(use_range);
+  BIT(rangestringalloc); /* the range string is malloc()'ed */
+  BIT(done); /* set to FALSE when Curl_init_do() is called and set to TRUE
                   when multi_done() is called, to prevent multi_done() to get
                   invoked twice when the multi interface is used. */
-  bit stream_depends_e:1; /* set or don't set the Exclusive bit */
-  bit previouslypending:1; /* this transfer WAS in the multi->pending queue */
+  BIT(stream_depends_e); /* set or don't set the Exclusive bit */
+  BIT(previouslypending); /* this transfer WAS in the multi->pending queue */
+  BIT(cookie_engine);
 };
 
 
@@ -1407,9 +1416,9 @@ struct DynamicStatic {
                                     curl_easy_setopt(COOKIEFILE) calls */
   struct curl_slist *resolve; /* set to point to the set.resolve list when
                                  this should be dealt with in pretransfer */
-  bit url_alloc:1;   /* URL string is malloc()'ed */
-  bit referer_alloc:1; /* referer string is malloc()ed */
-  bit wildcard_resolve:1; /* Set to true if any resolve change is a
+  BIT(url_alloc);   /* URL string is malloc()'ed */
+  BIT(referer_alloc); /* referer string is malloc()ed */
+  BIT(wildcard_resolve); /* Set to true if any resolve change is a
                               wildcard */
 };
 
@@ -1689,84 +1698,82 @@ struct UserDefined {
   CURLU *uh; /* URL handle for the current parsed URL */
   void *trailer_data; /* pointer to pass to trailer data callback */
   curl_trailer_callback trailer_callback; /* trailing data callback */
-  bit is_fread_set:1; /* has read callback been set to non-NULL? */
-  bit is_fwrite_set:1; /* has write callback been set to non-NULL? */
-  bit free_referer:1; /* set TRUE if 'referer' points to a string we
+  BIT(is_fread_set); /* has read callback been set to non-NULL? */
+  BIT(is_fwrite_set); /* has write callback been set to non-NULL? */
+  BIT(free_referer); /* set TRUE if 'referer' points to a string we
                         allocated */
-  bit tftp_no_options:1; /* do not send TFTP options requests */
-  bit sep_headers:1;     /* handle host and proxy headers separately */
-  bit cookiesession:1;   /* new cookie session? */
-  bit crlf:1;            /* convert crlf on ftp upload(?) */
-  bit strip_path_slash:1; /* strip off initial slash from path */
-  bit ssh_compression:1;            /* enable SSH compression */
+  BIT(tftp_no_options); /* do not send TFTP options requests */
+  BIT(sep_headers);     /* handle host and proxy headers separately */
+  BIT(cookiesession);   /* new cookie session? */
+  BIT(crlf);            /* convert crlf on ftp upload(?) */
+  BIT(strip_path_slash); /* strip off initial slash from path */
+  BIT(ssh_compression);            /* enable SSH compression */
 
 /* Here follows boolean settings that define how to behave during
    this session. They are STATIC, set by libcurl users or at least initially
    and they don't change during operations. */
-  bit get_filetime:1;     /* get the time and get of the remote file */
-  bit tunnel_thru_httpproxy:1; /* use CONNECT through a HTTP proxy */
-  bit prefer_ascii:1;     /* ASCII rather than binary */
-  bit ftp_append:1;       /* append, not overwrite, on upload */
-  bit ftp_list_only:1;    /* switch FTP command for listing directories */
+  BIT(get_filetime);     /* get the modification time of the remote file */
+  BIT(tunnel_thru_httpproxy); /* use CONNECT through an HTTP proxy */
+  BIT(prefer_ascii);     /* ASCII rather than binary */
+  BIT(ftp_append);       /* append, not overwrite, on upload */
+  BIT(ftp_list_only);    /* switch FTP command for listing directories */
 #ifndef CURL_DISABLE_FTP
-  bit ftp_use_port:1;     /* use the FTP PORT command */
-  bit ftp_use_epsv:1;   /* if EPSV is to be attempted or not */
-  bit ftp_use_eprt:1;   /* if EPRT is to be attempted or not */
-  bit ftp_use_pret:1;   /* if PRET is to be used before PASV or not */
-  bit ftp_skip_ip:1;    /* skip the IP address the FTP server passes on to
+  BIT(ftp_use_port);     /* use the FTP PORT command */
+  BIT(ftp_use_epsv);     /* if EPSV is to be attempted or not */
+  BIT(ftp_use_eprt);     /* if EPRT is to be attempted or not */
+  BIT(ftp_use_pret);     /* if PRET is to be used before PASV or not */
+  BIT(ftp_skip_ip);      /* skip the IP address the FTP server passes on to
                             us */
 #endif
-  bit hide_progress:1;    /* don't use the progress meter */
-  bit http_fail_on_error:1;  /* fail on HTTP error codes >= 400 */
-  bit http_keep_sending_on_error:1; /* for HTTP status codes >= 300 */
-  bit http_follow_location:1; /* follow HTTP redirects */
-  bit http_transfer_encoding:1; /* request compressed HTTP
-                                    transfer-encoding */
-  bit allow_auth_to_other_hosts:1;
-  bit include_header:1; /* include received protocol headers in data output */
-  bit http_set_referer:1; /* is a custom referer used */
-  bit http_auto_referer:1; /* set "correct" referer when following
-                               location: */
-  bit opt_no_body:1;    /* as set with CURLOPT_NOBODY */
-  bit upload:1;         /* upload request */
-  bit verbose:1;        /* output verbosity */
-  bit krb:1;            /* Kerberos connection requested */
-  bit reuse_forbid:1;   /* forbidden to be reused, close after use */
-  bit reuse_fresh:1;    /* do not re-use an existing connection  */
-
-  bit no_signal:1;      /* do not use any signal/alarm handler */
-  bit tcp_nodelay:1;    /* whether to enable TCP_NODELAY or not */
-  bit ignorecl:1;       /* ignore content length */
-  bit connect_only:1;   /* make connection, let application use the socket */
-  bit http_te_skip:1;   /* pass the raw body data to the user, even when
-                            transfer-encoded (chunked, compressed) */
-  bit http_ce_skip:1;   /* pass the raw body data to the user, even when
-                            content-encoded (chunked, compressed) */
-  bit proxy_transfer_mode:1; /* set transfer mode (;type=<a|i>) when doing
-                                 FTP via an HTTP proxy */
+  BIT(hide_progress);    /* don't use the progress meter */
+  BIT(http_fail_on_error);  /* fail on HTTP error codes >= 400 */
+  BIT(http_keep_sending_on_error); /* for HTTP status codes >= 300 */
+  BIT(http_follow_location); /* follow HTTP redirects */
+  BIT(http_transfer_encoding); /* request compressed HTTP transfer-encoding */
+  BIT(allow_auth_to_other_hosts);
+  BIT(include_header); /* include received protocol headers in data output */
+  BIT(http_set_referer); /* is a custom referer used */
+  BIT(http_auto_referer); /* set "correct" referer when following
+                             location: */
+  BIT(opt_no_body);    /* as set with CURLOPT_NOBODY */
+  BIT(upload);         /* upload request */
+  BIT(verbose);        /* output verbosity */
+  BIT(krb);            /* Kerberos connection requested */
+  BIT(reuse_forbid);   /* forbidden to be reused, close after use */
+  BIT(reuse_fresh);    /* do not re-use an existing connection  */
+  BIT(no_signal);      /* do not use any signal/alarm handler */
+  BIT(tcp_nodelay);    /* whether to enable TCP_NODELAY or not */
+  BIT(ignorecl);       /* ignore content length */
+  BIT(connect_only);   /* make connection, let application use the socket */
+  BIT(http_te_skip);   /* pass the raw body data to the user, even when
+                          transfer-encoded (chunked, compressed) */
+  BIT(http_ce_skip);   /* pass the raw body data to the user, even when
+                          content-encoded (chunked, compressed) */
+  BIT(proxy_transfer_mode); /* set transfer mode (;type=<a|i>) when doing
+                               FTP via an HTTP proxy */
 #if defined(HAVE_GSSAPI) || defined(USE_WINDOWS_SSPI)
-  bit socks5_gssapi_nec:1; /* Flag to support NEC SOCKS5 server */
+  BIT(socks5_gssapi_nec); /* Flag to support NEC SOCKS5 server */
 #endif
-  bit sasl_ir:1;         /* Enable/disable SASL initial response */
-  bit wildcard_enabled:1; /* enable wildcard matching */
-  bit tcp_keepalive:1;  /* use TCP keepalives */
-  bit tcp_fastopen:1;   /* use TCP Fast Open */
-  bit ssl_enable_npn:1; /* TLS NPN extension? */
-  bit ssl_enable_alpn:1;/* TLS ALPN extension? */
-  bit path_as_is:1;     /* allow dotdots? */
-  bit pipewait:1;       /* wait for multiplex status before starting a new
-                           connection */
-  bit suppress_connect_headers:1; /* suppress proxy CONNECT response headers
-                                      from user callbacks */
-  bit dns_shuffle_addresses:1; /* whether to shuffle addresses before use */
-  bit stream_depends_e:1; /* set or don't set the Exclusive bit */
-  bit haproxyprotocol:1; /* whether to send HAProxy PROXY protocol v1
-                             header */
-  bit abstract_unix_socket:1;
-  bit disallow_username_in_url:1; /* disallow username in url */
-  bit doh:1; /* DNS-over-HTTPS enabled */
-  bit doh_get:1; /* use GET for DoH requests, instead of POST */
-  bit http09_allowed:1; /* allow HTTP/0.9 responses */
+  BIT(sasl_ir);         /* Enable/disable SASL initial response */
+  BIT(wildcard_enabled); /* enable wildcard matching */
+  BIT(tcp_keepalive);  /* use TCP keepalives */
+  BIT(tcp_fastopen);   /* use TCP Fast Open */
+  BIT(ssl_enable_npn); /* TLS NPN extension? */
+  BIT(ssl_enable_alpn);/* TLS ALPN extension? */
+  BIT(path_as_is);     /* allow dotdots? */
+  BIT(pipewait);       /* wait for multiplex status before starting a new
+                          connection */
+  BIT(suppress_connect_headers); /* suppress proxy CONNECT response headers
+                                    from user callbacks */
+  BIT(dns_shuffle_addresses); /* whether to shuffle addresses before use */
+  BIT(stream_depends_e); /* set or don't set the Exclusive bit */
+  BIT(haproxyprotocol); /* whether to send HAProxy PROXY protocol v1
+                           header */
+  BIT(abstract_unix_socket);
+  BIT(disallow_username_in_url); /* disallow username in url */
+  BIT(doh); /* DNS-over-HTTPS enabled */
+  BIT(doh_get); /* use GET for DoH requests, instead of POST */
+  BIT(http09_allowed); /* allow HTTP/0.9 responses */
 };
 
 struct Names {
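
The long run of changes above converts the one-bit "bit x:1" members in
lib/urldata.h to a BIT() macro. The macro's actual definition is not part of
the hunks shown here; as a rough sketch of what such a macro presumably
expands to (the guard name is hypothetical, the real definition lives
elsewhere in the curl sources and may differ):

    /* sketch only, not the actual curl definition */
    #ifdef USE_ONE_BIT_FIELDS          /* hypothetical opt-in */
    #define BIT(x) unsigned int x:1
    #else
    #define BIT(x) bool x
    #endif

    struct example_flags {
      BIT(close);   /* expands to "bool close" (or a one-bit field) */
      BIT(reuse);
    };
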
diff --git a/lib/vauth/vauth.h b/lib/vauth/vauth.h
index 8838d9ee4..b5c5b7165 100644
--- a/lib/vauth/vauth.h
+++ b/lib/vauth/vauth.h
@@ -43,7 +43,7 @@ struct negotiatedata;
 #endif
 
 #if defined(USE_WINDOWS_SSPI)
-#define GSS_ERROR(status) (status & 0x80000000)
+#define GSS_ERROR(status) ((status) & 0x80000000)
 #endif
 
 /* This is used to build a SPN string */
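
The vauth.h hunk above is a macro-hygiene fix: without parentheses around the
argument, an expression passed to GSS_ERROR() binds against '&' in surprising
ways, because '&' has lower precedence than most other operators. An
illustrative expansion (not code from the patch):

    #define GSS_ERROR_OLD(status) (status & 0x80000000)
    #define GSS_ERROR_NEW(status) ((status) & 0x80000000)

    /* GSS_ERROR_OLD(a | b)  ->  (a | b & 0x80000000)
                             ==  (a | (b & 0x80000000))    wrong
       GSS_ERROR_NEW(a | b)  ->  ((a | b) & 0x80000000)    intended */
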
diff --git a/lib/version.c b/lib/version.c
index caefef919..50ffc3f4e 100644
--- a/lib/version.c
+++ b/lib/version.c
@@ -104,14 +104,12 @@ char *curl_version(void)
   left -= len;
   ptr += len;
 
-  if(left > 1) {
-    len = Curl_ssl_version(ptr + 1, left - 1);
+  len = Curl_ssl_version(ptr + 1, left - 1);
 
-    if(len > 0) {
-      *ptr = ' ';
-      left -= ++len;
-      ptr += len;
-    }
+  if(len > 0) {
+    *ptr = ' ';
+    left -= ++len;
+    ptr += len;
   }
 
 #ifdef HAVE_LIBZ
@@ -368,6 +366,9 @@ static curl_version_info_data version_info = {
 #endif
 #if defined(USE_ALTSVC)
   | CURL_VERSION_ALTSVC
+#endif
+#ifdef USE_ESNI
+  | CURL_VERSION_ESNI
 #endif
   ,
   NULL, /* ssl_version */
diff --git a/lib/vquic/ngtcp2.c b/lib/vquic/ngtcp2.c
index 6b3d53ee0..c0f9b16e3 100644
--- a/lib/vquic/ngtcp2.c
+++ b/lib/vquic/ngtcp2.c
@@ -43,7 +43,9 @@
 #include "memdebug.h"
 
 /* #define DEBUG_NGTCP2 */
+#ifdef CURLDEBUG
 #define DEBUG_HTTP3
+#endif
 #ifdef DEBUG_HTTP3
 #define H3BUGF(x) x
 #else
@@ -138,13 +140,13 @@ static void quic_settings(ngtcp2_settings *s,
   s->log_printf = NULL;
 #endif
   s->initial_ts = timestamp();
-  s->max_stream_data_bidi_local = stream_buffer_size;
-  s->max_stream_data_bidi_remote = QUIC_MAX_STREAMS;
-  s->max_stream_data_uni = QUIC_MAX_STREAMS;
-  s->max_data = QUIC_MAX_DATA;
-  s->max_streams_bidi = 1;
-  s->max_streams_uni = 3;
-  s->idle_timeout = QUIC_IDLE_TIMEOUT;
+  s->transport_params.initial_max_stream_data_bidi_local = stream_buffer_size;
+  s->transport_params.initial_max_stream_data_bidi_remote = QUIC_MAX_STREAMS;
+  s->transport_params.initial_max_stream_data_uni = QUIC_MAX_STREAMS;
+  s->transport_params.initial_max_data = QUIC_MAX_DATA;
+  s->transport_params.initial_max_streams_bidi = 1;
+  s->transport_params.initial_max_streams_uni = 3;
+  s->transport_params.idle_timeout = QUIC_IDLE_TIMEOUT;
 }
 
 static FILE *keylog_file; /* not thread-safe */
@@ -204,7 +206,7 @@ static int quic_add_handshake_data(SSL *ssl, OSSL_ENCRYPTION_LEVEL ossl_level,
       qs->qconn, level, (uint8_t *)(&crypto_data->buf[crypto_data->len] - len),
       len);
   if(rv) {
-    fprintf(stderr, "write_client_handshake failed\n");
+    H3BUGF(fprintf(stderr, "write_client_handshake failed\n"));
   }
   assert(0 == rv);
 
@@ -548,7 +550,7 @@ CURLcode Curl_quic_connect(struct connectdata *conn,
   if(!Curl_addr2string((struct sockaddr*)addr, addrlen, ipbuf, &port)) {
     char buffer[STRERROR_LEN];
     failf(data, "ssrem inet_ntop() failed with errno %d: %s",
-          errno, Curl_strerror(errno, buffer, sizeof(buffer)));
+          SOCKERRNO, Curl_strerror(SOCKERRNO, buffer, sizeof(buffer)));
     return CURLE_BAD_FUNCTION_ARGUMENT;
   }
 
@@ -597,12 +599,11 @@ CURLcode Curl_quic_connect(struct connectdata *conn,
 
   ngtcp2_conn_get_local_transport_params(qs->qconn, &params);
   nwrite = ngtcp2_encode_transport_params(
-      paramsbuf, sizeof(paramsbuf), NGTCP2_TRANSPORT_PARAMS_TYPE_CLIENT_HELLO,
-      &params);
+    paramsbuf, sizeof(paramsbuf), NGTCP2_TRANSPORT_PARAMS_TYPE_CLIENT_HELLO,
+    &params);
   if(nwrite < 0) {
-    fprintf(stderr, "ngtcp2_encode_transport_params: %s\n",
-            ngtcp2_strerror((int)nwrite));
-
+    failf(data, "ngtcp2_encode_transport_params: %s\n",
+          ngtcp2_strerror((int)nwrite));
     return CURLE_FAILED_INIT;
   }
 
@@ -699,7 +700,7 @@ static int cb_h3_stream_close(nghttp3_conn *conn, int64_t stream_id,
   (void)stream_id;
   (void)app_error_code;
   (void)user_data;
-  fprintf(stderr, "cb_h3_stream_close CALLED\n");
+  H3BUGF(infof(data, "cb_h3_stream_close CALLED\n"));
 
   stream->closed = TRUE;
   Curl_expire(data, 0, EXPIRE_QUIC);
@@ -715,7 +716,7 @@ static int cb_h3_recv_data(nghttp3_conn *conn, int64_t stream_id,
   struct Curl_easy *data = stream_user_data;
   struct HTTP *stream = data->req.protop;
   (void)conn;
-  fprintf(stderr, "cb_h3_recv_data CALLED with %d bytes\n", buflen);
+  H3BUGF(infof(data, "cb_h3_recv_data CALLED with %d bytes\n", buflen));
 
   /* TODO: this needs to be handled properly */
   DEBUGASSERT(buflen <= stream->len);
@@ -749,7 +750,6 @@ static int cb_h3_deferred_consume(nghttp3_conn *conn, int64_t stream_id,
   struct quicsocket *qs = user_data;
   (void)conn;
   (void)stream_user_data;
-  fprintf(stderr, "cb_h3_deferred_consume CALLED\n");
 
   ngtcp2_conn_extend_max_stream_offset(qs->qconn, stream_id, consumed);
   ngtcp2_conn_extend_max_offset(qs->qconn, consumed);
@@ -818,8 +818,6 @@ static int cb_h3_recv_header(nghttp3_conn *conn, int64_t stream_id,
   (void)flags;
   (void)user_data;
 
-  fprintf(stderr, "cb_h3_recv_header called!\n");
-
   if(h3name.len == sizeof(":status") - 1 &&
      !memcmp(":status", h3name.base, h3name.len)) {
     int status = decode_status_code(h3val.base, h3val.len);
@@ -849,7 +847,6 @@ static int cb_h3_send_stop_sending(nghttp3_conn *conn, int64_t stream_id,
   (void)app_error_code;
   (void)user_data;
   (void)stream_user_data;
-  fprintf(stderr, "cb_h3_send_stop_sending CALLED\n");
   return 0;
 }
 
@@ -947,9 +944,6 @@ static ssize_t ngh3_stream_recv(struct connectdata *conn,
   struct HTTP *stream = conn->data->req.protop;
   struct quicsocket *qs = conn->quic;
 
-  fprintf(stderr, "ngh3_stream_recv CALLED (easy %p, socket %d)\n",
-          conn->data, sockfd);
-
   if(!stream->memlen) {
     /* remember where to store incoming data for this stream and how big the
        buffer is */
@@ -1003,17 +997,18 @@ static int cb_h3_acked_stream_data(nghttp3_conn *conn, int64_t stream_id,
 
   if(!data->set.postfields) {
     stream->h3out->used -= datalen;
-    fprintf(stderr, "cb_h3_acked_stream_data, %zd bytes, %zd left unacked\n",
-            datalen, stream->h3out->used);
+    H3BUGF(infof(data,
+                 "cb_h3_acked_stream_data, %zd bytes, %zd left unacked\n",
+                 datalen, stream->h3out->used));
     DEBUGASSERT(stream->h3out->used < H3_SEND_SIZE);
   }
   return 0;
 }
 
-static int cb_h3_readfunction(nghttp3_conn *conn, int64_t stream_id,
-                              const uint8_t **pdata,
-                              size_t *pdatalen, uint32_t *pflags,
-                              void *user_data, void *stream_user_data)
+static ssize_t cb_h3_readfunction(nghttp3_conn *conn, int64_t stream_id,
+                                  nghttp3_vec *vec, size_t veccnt,
+                                  uint32_t *pflags, void *user_data,
+                                  void *stream_user_data)
 {
   struct Curl_easy *data = stream_user_data;
   size_t nread;
@@ -1021,12 +1016,13 @@ static int cb_h3_readfunction(nghttp3_conn *conn, int64_t stream_id,
   (void)conn;
   (void)stream_id;
   (void)user_data;
+  (void)veccnt;
 
   if(data->set.postfields) {
-    *pdata = data->set.postfields;
-    *pdatalen = data->state.infilesize;
+    vec[0].base = data->set.postfields;
+    vec[0].len = data->state.infilesize;
     *pflags = NGHTTP3_DATA_FLAG_EOF;
-    return 0;
+    return 1;
   }
 
   nread = CURLMIN(stream->upload_len, H3_SEND_SIZE - stream->h3out->used);
@@ -1044,8 +1040,8 @@ static int cb_h3_readfunction(nghttp3_conn *conn, int64_t stream_id,
     out->used += nread;
 
     /* that's the chunk we return to nghttp3 */
-    *pdata = &out->buf[out->windex];
-    *pdatalen = nread;
+    vec[0].base = &out->buf[out->windex];
+    vec[0].len = nread;
 
     if(out->windex == H3_SEND_SIZE)
       out->windex = 0; /* wrap */
@@ -1056,22 +1052,20 @@ static int cb_h3_readfunction(nghttp3_conn *conn, int64_t stream_id,
       if(!stream->upload_left)
         *pflags = NGHTTP3_DATA_FLAG_EOF;
     }
-    fprintf(stderr, "cb_h3_readfunction %zd bytes%s (at %zd unacked)\n",
-            nread, *pflags == NGHTTP3_DATA_FLAG_EOF?" EOF":"",
-            out->used);
+    H3BUGF(infof(data, "cb_h3_readfunction %zd bytes%s (at %zd unacked)\n",
+                 nread, *pflags == NGHTTP3_DATA_FLAG_EOF?" EOF":"",
+                 out->used));
   }
   if(stream->upload_done && !stream->upload_len &&
      (stream->upload_left <= 0)) {
     H3BUGF(infof(data, "!!!!!!!!! cb_h3_readfunction sets EOF\n"));
-    *pdata = NULL;
-    *pdatalen = 0;
     *pflags = NGHTTP3_DATA_FLAG_EOF;
+    return 0;
   }
   else if(!nread) {
-    *pdatalen = 0;
     return NGHTTP3_ERR_WOULDBLOCK;
   }
-  return 0;
+  return 1;
 }
 
 /* Index where :authority header field will appear in request header
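
Taken together, the cb_h3_readfunction hunks above move the request-body read
callback from the old pdata/pdatalen out-parameters to an array of nghttp3_vec
entries, with the return value carrying the number of entries filled. A
compressed sketch of the resulting contract, inferred from this patch rather
than from nghttp3 documentation:

    /* assumed shape of the vec-based read callback */
    static ssize_t example_read_cb(nghttp3_conn *conn, int64_t stream_id,
                                   nghttp3_vec *vec, size_t veccnt,
                                   uint32_t *pflags, void *user_data,
                                   void *stream_user_data)
    {
      static uint8_t body[] = "hello";
      (void)conn; (void)stream_id; (void)veccnt;
      (void)user_data; (void)stream_user_data;
      /* hand nghttp3 one buffer and flag it as the last one; returning 0
         with the EOF flag means "done, nothing more", and returning
         NGHTTP3_ERR_WOULDBLOCK means "nothing to send right now" */
      vec[0].base = body;
      vec[0].len = sizeof(body) - 1;
      *pflags = NGHTTP3_DATA_FLAG_EOF;
      return 1;   /* one vec entry filled */
    }
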
@@ -1202,8 +1196,10 @@ static CURLcode http_request(struct connectdata *conn, const void *mem,
       nva[i].namelen = strlen((char *)nva[i].name);
     }
     else {
-      nva[i].name = (unsigned char *)hdbuf;
       nva[i].namelen = (size_t)(end - hdbuf);
+      /* Lower case the header name for HTTP/3 */
+      Curl_strntolower((char *)hdbuf, hdbuf, nva[i].namelen);
+      nva[i].name = (unsigned char *)hdbuf;
     }
     nva[i].flags = NGHTTP3_NV_FLAG_NONE;
     hdbuf = end + 1;
@@ -1332,7 +1328,8 @@ static ssize_t ngh3_stream_send(struct connectdata *conn,
     sent = len;
   }
   else {
-    fprintf(stderr, "ngh3_stream_send() wants to send %zd bytes\n", len);
+    H3BUGF(infof(conn->data, "ngh3_stream_send() wants to send %zd bytes\n",
+                 len));
     if(!stream->upload_len) {
       stream->upload_mem = mem;
       stream->upload_len = len;
@@ -1407,13 +1404,13 @@ static CURLcode ng_process_ingress(struct connectdata *conn, int sockfd,
 
   for(;;) {
     remote_addrlen = sizeof(remote_addr);
-    while((recvd = recvfrom(sockfd, buf, bufsize, MSG_DONTWAIT,
+    while((recvd = recvfrom(sockfd, buf, bufsize, 0,
                             (struct sockaddr *)&remote_addr,
                             &remote_addrlen)) == -1 &&
-          errno == EINTR)
+          SOCKERRNO == EINTR)
       ;
     if(recvd == -1) {
-      if(errno == EAGAIN || errno == EWOULDBLOCK)
+      if(SOCKERRNO == EAGAIN || SOCKERRNO == EWOULDBLOCK)
         break;
 
       failf(conn->data, "ngtcp2: recvfrom() unexpectedly returned %d", recvd);
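
The errno-to-SOCKERRNO swaps in these vquic hunks matter on Windows, where
socket errors are not reported through errno. A sketch of what SOCKERRNO is
assumed to resolve to (the authoritative definition lives elsewhere in the
curl headers):

    #ifdef USE_WINSOCK
    #define SOCKERRNO ((int)WSAGetLastError())
    #else
    #define SOCKERRNO (errno)
    #endif
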
@@ -1524,7 +1521,7 @@ static CURLcode ng_flush_egress(struct connectdata *conn, int sockfd,
             return CURLE_SEND_ERROR;
           }
         }
-        else if(ndatalen > 0) {
+        else if(ndatalen >= 0) {
           rv = nghttp3_conn_add_write_offset(qs->h3conn, stream_id, ndatalen);
           if(rv != 0) {
             failf(conn->data,
@@ -1547,19 +1544,17 @@ static CURLcode ng_flush_egress(struct connectdata *conn, int sockfd,
     }
 
     memcpy(&remote_addr, ps.path.remote.addr, ps.path.remote.addrlen);
-    while((sent = sendto(sockfd, out, outlen, MSG_DONTWAIT,
-                         (struct sockaddr *)&remote_addr,
-                         (socklen_t)ps.path.remote.addrlen)) == -1 &&
-          errno == EINTR)
+    while((sent = send(sockfd, out, outlen, 0)) == -1 &&
+          SOCKERRNO == EINTR)
       ;
 
     if(sent == -1) {
-      if(errno == EAGAIN || errno == EWOULDBLOCK) {
+      if(SOCKERRNO == EAGAIN || SOCKERRNO == EWOULDBLOCK) {
         /* TODO Cache packet */
         break;
       }
       else {
-        failf(conn->data, "sendto() returned %zd (errno %d)\n", sent,
+        failf(conn->data, "send() returned %zd (errno %d)\n", sent,
               SOCKERRNO);
         return CURLE_SEND_ERROR;
       }
@@ -1589,8 +1584,6 @@ CURLcode Curl_quic_done_sending(struct connectdata *conn)
     /* only for HTTP/3 transfers */
     struct HTTP *stream = conn->data->req.protop;
     struct quicsocket *qs = conn->quic;
-    fprintf(stderr, "!!! Curl_quic_done_sending stream %zu\n",
-            stream->stream3_id);
     stream->upload_done = TRUE;
     (void)nghttp3_conn_resume_stream(qs->h3conn, stream->stream3_id);
   }
diff --git a/lib/vquic/quiche.c b/lib/vquic/quiche.c
index 7f9b34a1e..0ee360d07 100644
--- a/lib/vquic/quiche.c
+++ b/lib/vquic/quiche.c
@@ -50,7 +50,7 @@
 
 #define QUIC_MAX_STREAMS (256*1024)
 #define QUIC_MAX_DATA (1*1024*1024)
-#define QUIC_IDLE_TIMEOUT 60 * 1000 /* milliseconds */
+#define QUIC_IDLE_TIMEOUT (60 * 1000) /* milliseconds */
 
 static CURLcode process_ingress(struct connectdata *conn,
                                 curl_socket_t sockfd,
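
The QUIC_IDLE_TIMEOUT change is the same kind of parenthesization fix as in
vauth.h, this time guarding against arithmetic precedence; an illustrative
expansion (not from the patch):

    /* old:  x % QUIC_IDLE_TIMEOUT  ->  x % 60 * 1000  ==  (x % 60) * 1000
       new:  x % QUIC_IDLE_TIMEOUT  ->  x % (60 * 1000)                   */
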
@@ -203,17 +203,17 @@ CURLcode Curl_quic_connect(struct connectdata *conn, curl_socket_t sockfd,
   if(result)
     return result;
 
-#if 0
   /* store the used address as a string */
-  if(!Curl_addr2string((struct sockaddr*)addr,
+  if(!Curl_addr2string((struct sockaddr*)addr, addrlen,
                        conn->primary_ip, &conn->primary_port)) {
     char buffer[STRERROR_LEN];
     failf(data, "ssrem inet_ntop() failed with errno %d: %s",
-          errno, Curl_strerror(errno, buffer, sizeof(buffer)));
+          SOCKERRNO, Curl_strerror(SOCKERRNO, buffer, sizeof(buffer)));
     return CURLE_BAD_FUNCTION_ARGUMENT;
   }
   memcpy(conn->ip_addr_str, conn->primary_ip, MAX_IPADR_LEN);
-#endif
+  Curl_persistconninfo(conn);
+
   /* for connection reuse purposes: */
   conn->ssl[FIRSTSOCKET].state = ssl_connection_complete;
 
@@ -237,7 +237,7 @@ static CURLcode quiche_has_connected(struct connectdata *conn,
   conn->httpversion = 30;
   conn->bundle->multiuse = BUNDLE_MULTIPLEX;
 
-  qs->h3config = quiche_h3_config_new(0, 1024, 0, 0);
+  qs->h3config = quiche_h3_config_new();
   if(!qs->h3config)
     return CURLE_OUT_OF_MEMORY;
 
@@ -301,7 +301,7 @@ static CURLcode process_ingress(struct connectdata *conn, int sockfd,
 
   do {
     recvd = recv(sockfd, buf, bufsize, 0);
-    if((recvd < 0) && ((errno == EAGAIN) || (errno == EWOULDBLOCK)))
+    if((recvd < 0) && ((SOCKERRNO == EAGAIN) || (SOCKERRNO == EWOULDBLOCK)))
       break;
 
     if(recvd < 0) {
@@ -404,13 +404,14 @@ static ssize_t h3_stream_recv(struct connectdata *conn,
   quiche_h3_event *ev;
   int rc;
   struct h3h1header headers;
-  struct HTTP *stream = conn->data->req.protop;
+  struct Curl_easy *data = conn->data;
+  struct HTTP *stream = data->req.protop;
   headers.dest = buf;
   headers.destlen = buffersize;
   headers.nlen = 0;
 
   if(process_ingress(conn, sockfd, qs)) {
-    infof(conn->data, "h3_stream_recv returns on ingress\n");
+    infof(data, "h3_stream_recv returns on ingress\n");
     *curlcode = CURLE_RECV_ERROR;
     return -1;
   }
@@ -423,7 +424,7 @@ static ssize_t h3_stream_recv(struct connectdata *conn,
 
     if(s != stream->stream3_id) {
       /* another transfer, ignore for now */
-      infof(conn->data, "Got h3 for stream %u, expects %u\n",
+      infof(data, "Got h3 for stream %u, expects %u\n",
             s, stream->stream3_id);
       continue;
     }
@@ -458,9 +459,7 @@ static ssize_t h3_stream_recv(struct connectdata *conn,
       break;
 
     case QUICHE_H3_EVENT_FINISHED:
-      if(quiche_conn_close(qs->conn, true, 0, NULL, 0) < 0) {
-        ;
-      }
+      streamclose(conn, "End of stream");
       recvd = 0; /* end of stream */
       break;
     default:
@@ -477,7 +476,9 @@ static ssize_t h3_stream_recv(struct connectdata *conn,
   *curlcode = (-1 == recvd)? CURLE_AGAIN : CURLE_OK;
   if(recvd >= 0)
     /* Get this called again to drain the event queue */
-    Curl_expire(conn->data, 0, EXPIRE_QUIC);
+    Curl_expire(data, 0, EXPIRE_QUIC);
+
+  data->state.drain = (recvd >= 0) ? 1 : 0;
   return recvd;
 }
 
@@ -646,8 +647,10 @@ static CURLcode http_request(struct connectdata *conn, const void *mem,
       nva[i].name_len = strlen((char *)nva[i].name);
     }
     else {
-      nva[i].name = (unsigned char *)hdbuf;
       nva[i].name_len = (size_t)(end - hdbuf);
+      /* Lower case the header name for HTTP/3 */
+      Curl_strntolower((char *)hdbuf, hdbuf, nva[i].name_len);
+      nva[i].name = (unsigned char *)hdbuf;
     }
     hdbuf = end + 1;
     while(*hdbuf == ' ' || *hdbuf == '\t')
diff --git a/lib/vssh/libssh.c b/lib/vssh/libssh.c
index 7b42b5578..6bd2ade80 100644
--- a/lib/vssh/libssh.c
+++ b/lib/vssh/libssh.c
@@ -493,7 +493,7 @@ restart:
         return SSH_ERROR;
 
       nprompts = ssh_userauth_kbdint_getnprompts(sshc->ssh_session);
-      if(nprompts == SSH_ERROR || nprompts != 1)
+      if(nprompts != 1)
         return SSH_ERROR;
 
       rc = ssh_userauth_kbdint_setanswer(sshc->ssh_session, 0, conn->passwd);
@@ -1356,7 +1356,7 @@ static CURLcode myssh_statemach_act(struct connectdata *conn, bool *block)
           break;
         }
       }
-      else if(sshc->readdir_attrs == NULL && sftp_dir_eof(sshc->sftp_dir)) {
+      else if(sftp_dir_eof(sshc->sftp_dir)) {
         state(conn, SSH_SFTP_READDIR_DONE);
         break;
       }
@@ -1999,7 +1999,7 @@ static CURLcode myssh_block_statemach(struct connectdata *conn,
       }
     }
 
-    if(!result && block) {
+    if(block) {
       curl_socket_t fd_read = conn->sock[FIRSTSOCKET];
       /* wait for the socket to become ready */
       (void) Curl_socket_check(fd_read, CURL_SOCKET_BAD,
diff --git a/lib/vssh/libssh2.c b/lib/vssh/libssh2.c
index b9cf0c808..2429d5f55 100644
--- a/lib/vssh/libssh2.c
+++ b/lib/vssh/libssh2.c
@@ -2811,7 +2811,7 @@ static CURLcode ssh_block_statemach(struct connectdata *conn,
     }
 
 #ifdef HAVE_LIBSSH2_SESSION_BLOCK_DIRECTION
-    if(!result && block) {
+    if(block) {
       int dir = libssh2_session_block_directions(sshc->ssh_session);
       curl_socket_t sock = conn->sock[FIRSTSOCKET];
       curl_socket_t fd_read = CURL_SOCKET_BAD;
@@ -2822,7 +2822,7 @@ static CURLcode ssh_block_statemach(struct connectdata *conn,
         fd_write = sock;
       /* wait for the socket to become ready */
       (void)Curl_socket_check(fd_read, CURL_SOCKET_BAD, fd_write,
-                              left>1000?1000:left); /* ignore result */
+                              left>1000?1000:(time_t)left);
     }
 #endif
 
diff --git a/lib/vtls/gskit.c b/lib/vtls/gskit.c
index 40cd9e1de..c1b3a2653 100644
--- a/lib/vtls/gskit.c
+++ b/lib/vtls/gskit.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -26,6 +26,8 @@
 
 #include <gskssl.h>
 #include <qsoasync.h>
+#undef HAVE_SOCKETPAIR /* because the native one isn't good enough */
+#include "socketpair.h"
 
 /* Some symbols are undefined/unsupported on OS400 versions < V7R1. */
 #ifndef GSK_SSL_EXTN_SERVERNAME_REQUEST
@@ -511,100 +513,6 @@ static void close_async_handshake(struct ssl_connect_data *connssl)
   BACKEND->iocport = -1;
 }
 
-/* SSL over SSL
- * Problems:
- * 1) GSKit can only perform SSL on an AF_INET or AF_INET6 stream socket. To
- *    pipe an SSL stream into another, it is therefore needed to have a pair
- *    of such communicating sockets and handle the pipelining explicitly.
- * 2) OS/400 socketpair() is only implemented for domain AF_UNIX, thus cannot
- *    be used to produce the pipeline.
- * The solution is to simulate socketpair() for AF_INET with low-level API
- *    listen(), bind() and connect().
- */
-
-static int
-inetsocketpair(int sv[2])
-{
-  int lfd;      /* Listening socket. */
-  int sfd;      /* Server socket. */
-  int cfd;      /* Client socket. */
-  int len;
-  struct sockaddr_in addr1;
-  struct sockaddr_in addr2;
-
-  /* Create listening socket on a local dynamic port. */
-  lfd = socket(AF_INET, SOCK_STREAM, 0);
-  if(lfd < 0)
-    return -1;
-  memset((char *) &addr1, 0, sizeof(addr1));
-  addr1.sin_family = AF_INET;
-  addr1.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
-  addr1.sin_port = 0;
-  if(bind(lfd, (struct sockaddr *) &addr1, sizeof(addr1)) ||
-     listen(lfd, 2) < 0) {
-    close(lfd);
-    return -1;
-  }
-
-  /* Get the allocated port. */
-  len = sizeof(addr1);
-  if(getsockname(lfd, (struct sockaddr *) &addr1, &len) < 0) {
-    close(lfd);
-    return -1;
-  }
-
-  /* Create the client socket. */
-  cfd = socket(AF_INET, SOCK_STREAM, 0);
-  if(cfd < 0) {
-    close(lfd);
-    return -1;
-  }
-
-  /* Request unblocking connection to the listening socket. */
-  curlx_nonblock(cfd, TRUE);
-  if(connect(cfd, (struct sockaddr *) &addr1, sizeof(addr1)) < 0 &&
-     errno != EINPROGRESS) {
-    close(lfd);
-    close(cfd);
-    return -1;
-  }
-
-  /* Get the client dynamic port for intrusion check below. */
-  len = sizeof(addr2);
-  if(getsockname(cfd, (struct sockaddr *) &addr2, &len) < 0) {
-    close(lfd);
-    close(cfd);
-    return -1;
-  }
-
-  /* Accept the incoming connection and get the server socket. */
-  curlx_nonblock(lfd, TRUE);
-  for(;;) {
-    len = sizeof(addr1);
-    sfd = accept(lfd, (struct sockaddr *) &addr1, &len);
-    if(sfd < 0) {
-      close(lfd);
-      close(cfd);
-      return -1;
-    }
-
-    /* Check for possible intrusion from an external process. */
-    if(addr1.sin_addr.s_addr == addr2.sin_addr.s_addr &&
-       addr1.sin_port == addr2.sin_port)
-      break;
-
-    /* Intrusion: reject incoming connection. */
-    close(sfd);
-  }
-
-  /* Done, return sockets and succeed. */
-  close(lfd);
-  curlx_nonblock(cfd, FALSE);
-  sv[0] = cfd;
-  sv[1] = sfd;
-  return 0;
-}
-
 static int pipe_ssloverssl(struct connectdata *conn, int sockindex,
                            int directions)
 {
@@ -855,7 +763,7 @@ static CURLcode gskit_connect_step1(struct connectdata *conn, int sockindex)
 
   /* Establish a pipelining socket pair for SSL over SSL. */
   if(conn->proxy_ssl[sockindex].use) {
-    if(inetsocketpair(sockpair))
+    if(Curl_socketpair(0, 0, 0, sockpair))
       return CURLE_SSL_CONNECT_ERROR;
     BACKEND->localfd = sockpair[0];
     BACKEND->remotefd = sockpair[1];
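
The hand-rolled inetsocketpair() deleted above is replaced by the shared
Curl_socketpair() helper (note the new socketpair.h include earlier in this
file, presumably backed by lib/socketpair.c). A minimal usage sketch mirroring
the call in this hunk; the zero arguments are copied from the patch, the
surrounding code and the element type are illustrative assumptions:

    curl_socket_t sockpair[2];   /* assumed element type */
    if(Curl_socketpair(0, 0, 0, sockpair))
      return CURLE_SSL_CONNECT_ERROR;
    /* sockpair[0] and sockpair[1] are two connected sockets: bytes written
       on one end can be read from the other, which is what the SSL-over-SSL
       piping in this backend needs */
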
@@ -1157,7 +1065,7 @@ static CURLcode gskit_connect_common(struct connectdata *conn, int sockindex,
 {
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
-  long timeout_ms;
+  timediff_t timeout_ms;
   CURLcode result = CURLE_OK;
 
   *done = connssl->state == ssl_connection_complete;
diff --git a/lib/vtls/gtls.c b/lib/vtls/gtls.c
index 8693cdce3..3737d7c68 100644
--- a/lib/vtls/gtls.c
+++ b/lib/vtls/gtls.c
@@ -288,7 +288,7 @@ static CURLcode handshake(struct connectdata *conn,
   curl_socket_t sockfd = conn->sock[sockindex];
 
   for(;;) {
-    time_t timeout_ms;
+    timediff_t timeout_ms;
     int rc;
 
     /* check allowed time left */
@@ -311,7 +311,7 @@ static CURLcode handshake(struct connectdata *conn,
 
       what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
                                nonblocking?0:
-                               timeout_ms?timeout_ms:1000);
+                               timeout_ms?(time_t)timeout_ms:1000);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);
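
The long/time_t-to-timediff_t change above repeats across several TLS
backends in this patch: timediff_t is curl's wide type for millisecond time
differences, while Curl_socket_check() still takes a narrower timeout argument
here, hence the explicit (time_t) casts at each call site. The recurring
pattern, compressed into a sketch (the real prototypes live elsewhere in the
tree):

    timediff_t timeout_ms = Curl_timeleft(data, NULL, TRUE);
    if(timeout_ms < 0)
      return CURLE_OPERATION_TIMEDOUT;   /* already out of time */
    what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
                             nonblocking ? 0 : (time_t)timeout_ms);
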
@@ -1608,7 +1608,7 @@ static ssize_t gtls_send(struct connectdata *conn,
 static void close_one(struct ssl_connect_data *connssl)
 {
   if(BACKEND->session) {
-    gnutls_bye(BACKEND->session, GNUTLS_SHUT_RDWR);
+    gnutls_bye(BACKEND->session, GNUTLS_SHUT_WR);
     gnutls_deinit(BACKEND->session);
     BACKEND->session = NULL;
   }
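
The GNUTLS_SHUT_RDWR to GNUTLS_SHUT_WR switch above changes how the session is
torn down: as I read the GnuTLS API, SHUT_RDWR waits for the peer's
close_notify in return, while SHUT_WR only sends ours and returns, so the
close path no longer blocks on a peer that never answers:

    /* illustrative only */
    gnutls_bye(session, GNUTLS_SHUT_WR);   /* send close_notify, don't wait */
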
diff --git a/lib/vtls/mbedtls.c b/lib/vtls/mbedtls.c
index 63d1f4c81..e34ec9d13 100644
--- a/lib/vtls/mbedtls.c
+++ b/lib/vtls/mbedtls.c
@@ -588,6 +588,9 @@ mbed_connect_step2(struct connectdata *conn,
     else if(ret & MBEDTLS_X509_BADCERT_NOT_TRUSTED)
       failf(data, "Cert verify failed: BADCERT_NOT_TRUSTED");
 
+    else if(ret & MBEDTLS_X509_BADCERT_FUTURE)
+      failf(data, "Cert verify failed: BADCERT_FUTURE");
+
     return CURLE_PEER_FAILED_VERIFICATION;
   }
 
@@ -884,7 +887,7 @@ mbed_connect_common(struct connectdata *conn,
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
   curl_socket_t sockfd = conn->sock[sockindex];
-  long timeout_ms;
+  timediff_t timeout_ms;
   int what;
 
   /* check if the connection has already been established */
@@ -930,7 +933,7 @@ mbed_connect_common(struct connectdata *conn,
         connssl->connecting_state?sockfd:CURL_SOCKET_BAD;
 
       what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
-                               nonblocking ? 0 : timeout_ms);
+                               nonblocking ? 0 : (time_t)timeout_ms);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);
diff --git a/lib/vtls/mesalink.c b/lib/vtls/mesalink.c
index 9507888bd..cab1e390b 100644
--- a/lib/vtls/mesalink.c
+++ b/lib/vtls/mesalink.c
@@ -6,7 +6,7 @@
  *                             \___|\___/|_| \_\_____|
  *
  * Copyright (C) 2017 - 2018, Yiming Jing, <address@hidden>
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -494,7 +494,7 @@ mesalink_connect_common(struct connectdata *conn, int sockindex,
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
   curl_socket_t sockfd = conn->sock[sockindex];
-  time_t timeout_ms;
+  timediff_t timeout_ms;
   int what;
 
   /* check if the connection has already been established */
@@ -543,7 +543,8 @@ mesalink_connect_common(struct connectdata *conn, int sockindex,
                                : CURL_SOCKET_BAD;
 
       what = Curl_socket_check(
-        readfd, CURL_SOCKET_BAD, writefd, nonblocking ? 0 : timeout_ms);
+        readfd, CURL_SOCKET_BAD, writefd,
+        nonblocking ? 0 : (time_t)timeout_ms);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);
diff --git a/lib/vtls/nss.c b/lib/vtls/nss.c
index 435f3e93a..a375f00da 100644
--- a/lib/vtls/nss.c
+++ b/lib/vtls/nss.c
@@ -2127,7 +2127,7 @@ static CURLcode nss_do_connect(struct connectdata *conn, int sockindex)
 
 
   /* check timeout situation */
-  const time_t time_left = Curl_timeleft(data, NULL, TRUE);
+  const timediff_t time_left = Curl_timeleft(data, NULL, TRUE);
   if(time_left < 0) {
     failf(data, "timed out before SSL handshake");
     result = CURLE_OPERATION_TIMEDOUT;
diff --git a/lib/vtls/openssl.c b/lib/vtls/openssl.c
index 385f28179..760758d23 100644
--- a/lib/vtls/openssl.c
+++ b/lib/vtls/openssl.c
@@ -44,6 +44,7 @@
 #include "strcase.h"
 #include "hostcheck.h"
 #include "multiif.h"
+#include "strerror.h"
 #include "curl_printf.h"
 #include <openssl/ssl.h>
 #include <openssl/rand.h>
@@ -2165,8 +2166,13 @@ set_ssl_version_min_max(SSL_CTX *ctx, struct connectdata *conn)
   long curl_ssl_version_max;
 
   /* convert cURL min SSL version option to OpenSSL constant */
+#if defined(OPENSSL_IS_BORINGSSL) || defined(LIBRESSL_VERSION_NUMBER)
+  uint16_t ossl_ssl_version_min = 0;
+  uint16_t ossl_ssl_version_max = 0;
+#else
   long ossl_ssl_version_min = 0;
   long ossl_ssl_version_max = 0;
+#endif
   switch(curl_ssl_version_min) {
     case CURL_SSLVERSION_TLSv1: /* TLS 1.x */
     case CURL_SSLVERSION_TLSv1_0:
@@ -2186,10 +2192,10 @@ set_ssl_version_min_max(SSL_CTX *ctx, struct connectdata *conn)
   }
 
   /* CURL_SSLVERSION_DEFAULT means that no option was selected.
-    We don't want to pass 0 to SSL_CTX_set_min_proto_version as
-    it would enable all versions down to the lowest supported by
-    the library.
-    So we skip this, and stay with the OS default
+     We don't want to pass 0 to SSL_CTX_set_min_proto_version as
+     it would enable all versions down to the lowest supported by
+     the library.
+     So we skip this, and stay with the OS default
   */
   if(curl_ssl_version_min != CURL_SSLVERSION_DEFAULT) {
     if(!SSL_CTX_set_min_proto_version(ctx, ossl_ssl_version_min)) {
@@ -3649,7 +3655,7 @@ static CURLcode ossl_connect_common(struct connectdata *conn,
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
   curl_socket_t sockfd = conn->sock[sockindex];
-  time_t timeout_ms;
+  timediff_t timeout_ms;
   int what;
 
   /* check if the connection has already been established */
@@ -3696,7 +3702,7 @@ static CURLcode ossl_connect_common(struct connectdata *conn,
         connssl->connecting_state?sockfd:CURL_SOCKET_BAD;
 
       what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
-                               nonblocking?0:timeout_ms);
+                               nonblocking?0:(time_t)timeout_ms);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);
@@ -3820,8 +3826,8 @@ static ssize_t ossl_send(struct connectdata *conn,
       *curlcode = CURLE_AGAIN;
       return -1;
     case SSL_ERROR_SYSCALL:
-      failf(conn->data, "SSL_write() returned SYSCALL, errno = %d",
-            SOCKERRNO);
+      Curl_strerror(SOCKERRNO, error_buffer, sizeof(error_buffer));
+      failf(conn->data, OSSL_PACKAGE " SSL_write: %s", error_buffer);
       *curlcode = CURLE_SEND_ERROR;
       return -1;
     case SSL_ERROR_SSL:
@@ -3878,13 +3884,21 @@ static ssize_t ossl_recv(struct connectdata *conn, /* connection data */
       break;
     case SSL_ERROR_ZERO_RETURN: /* no more data */
       /* close_notify alert */
-      connclose(conn, "TLS close_notify");
+      if(num == FIRSTSOCKET)
+        /* mark the connection for close if it is indeed the control
+           connection */
+        connclose(conn, "TLS close_notify");
       break;
     case SSL_ERROR_WANT_READ:
     case SSL_ERROR_WANT_WRITE:
       /* there's data pending, re-invoke SSL_read() */
       *curlcode = CURLE_AGAIN;
       return -1;
+    case SSL_ERROR_SYSCALL:
+      Curl_strerror(SOCKERRNO, error_buffer, sizeof(error_buffer));
+      failf(conn->data, OSSL_PACKAGE " SSL_read: %s", error_buffer);
+      *curlcode = CURLE_RECV_ERROR;
+      return -1;
     default:
       /* openssl/ssl.h for SSL_ERROR_SYSCALL says "look at error stack/return
          value/errno" */
diff --git a/lib/vtls/polarssl.c b/lib/vtls/polarssl.c
index 7ea26b442..9e7dd9043 100644
--- a/lib/vtls/polarssl.c
+++ b/lib/vtls/polarssl.c
@@ -734,7 +734,7 @@ polarssl_connect_common(struct connectdata *conn,
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
   curl_socket_t sockfd = conn->sock[sockindex];
-  long timeout_ms;
+  timediff_t timeout_ms;
   int what;
 
   /* check if the connection has already been established */
@@ -781,7 +781,7 @@ polarssl_connect_common(struct connectdata *conn,
         connssl->connecting_state?sockfd:CURL_SOCKET_BAD;
 
       what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
-                               nonblocking?0:timeout_ms);
+                               nonblocking?0:(time_t)timeout_ms);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);
diff --git a/lib/vtls/schannel.c b/lib/vtls/schannel.c
index 0f6f734fd..bbd2fe921 100644
--- a/lib/vtls/schannel.c
+++ b/lib/vtls/schannel.c
@@ -1181,6 +1181,7 @@ struct Adder_args
   struct connectdata *conn;
   CURLcode result;
   int idx;
+  int certs_count;
 };
 
 static bool
@@ -1191,7 +1192,9 @@ add_cert_to_certinfo(const CERT_CONTEXT *ccert_context, void *raw_arg)
   if(valid_cert_encoding(ccert_context)) {
     const char *beg = (const char *) ccert_context->pbCertEncoded;
     const char *end = beg + ccert_context->cbCertEncoded;
-    args->result = Curl_extract_certinfo(args->conn, (args->idx)++, beg, end);
+    int insert_index = (args->certs_count - 1) - args->idx;
+    args->result = Curl_extract_certinfo(args->conn, insert_index, beg, end);
+    args->idx++;
   }
   return args->result == CURLE_OK;
 }
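
The Adder_args change above reverses the order in which certificates are
handed to Curl_extract_certinfo(). A worked mapping with illustrative numbers
(not from the patch):

    /* insert_index = (certs_count - 1) - idx
       certs_count == 3:  idx 0 -> 2,  idx 1 -> 1,  idx 2 -> 0
       so the last certificate enumerated is stored first */
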
@@ -1326,6 +1329,7 @@ schannel_connect_step3(struct connectdata *conn, int sockindex)
       struct Adder_args args;
       args.conn = conn;
       args.idx = 0;
+      args.certs_count = certs_count;
       traverse_cert_store(ccert_context, add_cert_to_certinfo, &args);
       result = args.result;
     }
@@ -1347,7 +1351,7 @@ schannel_connect_common(struct connectdata *conn, int sockindex,
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
   curl_socket_t sockfd = conn->sock[sockindex];
-  time_t timeout_ms;
+  timediff_t timeout_ms;
   int what;
 
   /* check if the connection has already been established */
@@ -1394,7 +1398,7 @@ schannel_connect_common(struct connectdata *conn, int sockindex,
         connssl->connecting_state ? sockfd : CURL_SOCKET_BAD;
 
       what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
-                               nonblocking ? 0 : timeout_ms);
+                               nonblocking ? 0 : (time_t)timeout_ms);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL/TLS socket, errno: %d", SOCKERRNO);
@@ -1544,7 +1548,7 @@ schannel_send(struct connectdata *conn, int sockindex,
     /* send entire message or fail */
     while(len > (size_t)written) {
       ssize_t this_write;
-      time_t timeleft;
+      timediff_t timeleft;
       int what;
 
       this_write = 0;
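The certinfo hunks in schannel.c above insert certificates at a reversed index, so the server's own (leaf) certificate ends up at slot 0 of the extracted list, and the timeout variables switch to timediff_t with an explicit cast where a narrower time_t is expected. A small illustrative sketch of the index arithmetic only, with hypothetical data rather than Schannel types:

    #include <stdio.h>

    /* Sketch: given a store that enumerates certificates root-first, compute
       the insertion index so the leaf certificate lands at slot 0. */
    int main(void)
    {
      const char *store_order[] = {"root CA", "intermediate CA", "leaf"};
      int certs_count = 3;
      int idx;

      for(idx = 0; idx < certs_count; idx++) {
        int insert_index = (certs_count - 1) - idx; /* same formula as the patch */
        printf("store slot %d -> certinfo slot %d (%s)\n",
               idx, insert_index, store_order[idx]);
      }
      return 0;
    }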
diff --git a/lib/vtls/schannel_verify.c b/lib/vtls/schannel_verify.c
index 5a09e969e..1bdf50a55 100644
--- a/lib/vtls/schannel_verify.c
+++ b/lib/vtls/schannel_verify.c
@@ -111,7 +111,7 @@ static CURLcode add_certs_to_store(HCERTSTORE trust_store,
    */
   ca_file_handle = CreateFile(ca_file_tstr,
                               GENERIC_READ,
-                              0,
+                              FILE_SHARE_READ,
                               NULL,
                               OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL,
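Opening the CA file with FILE_SHARE_READ instead of 0 (no sharing at all) lets the read succeed even while another handle already has the file open for reading. A minimal Win32 sketch of that call; the file name is a placeholder for illustration:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
      /* "ca-bundle.crt" is an example path, not one the patch uses */
      HANDLE h = CreateFileA("ca-bundle.crt",
                             GENERIC_READ,
                             FILE_SHARE_READ,     /* allow concurrent readers */
                             NULL,
                             OPEN_EXISTING,
                             FILE_ATTRIBUTE_NORMAL,
                             NULL);
      if(h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
      }
      CloseHandle(h);
      return 0;
    }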
diff --git a/lib/vtls/sectransp.c b/lib/vtls/sectransp.c
index 3fb125ab5..4eece89d5 100644
--- a/lib/vtls/sectransp.c
+++ b/lib/vtls/sectransp.c
@@ -79,7 +79,7 @@
 /* These macros mean "the following code is present to allow runtime backward
    compatibility with at least this cat or earlier":
    (You set this at build-time using the compiler command line option
-   "-mmacos-version-min.") */
+   "-mmacosx-version-min.") */
 #define CURL_SUPPORT_MAC_10_5 MAC_OS_X_VERSION_MIN_REQUIRED <= 1050
 #define CURL_SUPPORT_MAC_10_6 MAC_OS_X_VERSION_MIN_REQUIRED <= 1060
 #define CURL_SUPPORT_MAC_10_7 MAC_OS_X_VERSION_MIN_REQUIRED <= 1070
@@ -2805,7 +2805,7 @@ sectransp_connect_common(struct connectdata *conn,
   struct Curl_easy *data = conn->data;
   struct ssl_connect_data *connssl = &conn->ssl[sockindex];
   curl_socket_t sockfd = conn->sock[sockindex];
-  long timeout_ms;
+  timediff_t timeout_ms;
   int what;
 
   /* check if the connection has already been established */
@@ -2852,7 +2852,7 @@ sectransp_connect_common(struct connectdata *conn,
       connssl->connecting_state?sockfd:CURL_SOCKET_BAD;
 
       what = Curl_socket_check(readfd, CURL_SOCKET_BAD, writefd,
-                               nonblocking?0:timeout_ms);
+                               nonblocking?0:(time_t)timeout_ms);
       if(what < 0) {
         /* fatal error */
         failf(data, "select/poll on SSL socket, errno: %d", SOCKERRNO);
diff --git a/lib/vtls/vtls.c b/lib/vtls/vtls.c
index 422819899..e6d756225 100644
--- a/lib/vtls/vtls.c
+++ b/lib/vtls/vtls.c
@@ -97,7 +97,8 @@ Curl_ssl_config_matches(struct ssl_primary_config* data,
      Curl_safe_strcasecompare(data->random_file, needle->random_file) &&
      Curl_safe_strcasecompare(data->egdsocket, needle->egdsocket) &&
      Curl_safe_strcasecompare(data->cipher_list, needle->cipher_list) &&
-     Curl_safe_strcasecompare(data->cipher_list13, needle->cipher_list13))
+     Curl_safe_strcasecompare(data->cipher_list13, needle->cipher_list13) &&
+     Curl_safe_strcasecompare(data->pinned_key, needle->pinned_key))
     return TRUE;
 
   return FALSE;
@@ -121,6 +122,7 @@ Curl_clone_primary_ssl_config(struct ssl_primary_config *source,
   CLONE_STRING(egdsocket);
   CLONE_STRING(cipher_list);
   CLONE_STRING(cipher_list13);
+  CLONE_STRING(pinned_key);
 
   return TRUE;
 }
@@ -134,6 +136,7 @@ void Curl_free_primary_ssl_config(struct ssl_primary_config* sslc)
   Curl_safefree(sslc->egdsocket);
   Curl_safefree(sslc->cipher_list);
   Curl_safefree(sslc->cipher_list13);
+  Curl_safefree(sslc->pinned_key);
 }
 
 #ifdef USE_SSL
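With the vtls.c hunks above, the pinned public key becomes part of the primary TLS configuration: it is compared when deciding whether a cached connection may be reused, and cloned and freed alongside the other strings. The comparison must treat "both unset" as a match; an illustrative NULL-tolerant sketch of that idea (not libcurl's actual Curl_safe_strcasecompare()):

    #include <stdbool.h>
    #include <strings.h>   /* strcasecmp(), POSIX */

    /* Sketch: case-insensitive equality where "both unset" also counts as a
       match, so an absent pinned key only matches another absent pinned key. */
    static bool safe_strcaseequal(const char *a, const char *b)
    {
      if(a && b)
        return strcasecmp(a, b) == 0;
      return !a && !b;   /* equal only if both are NULL */
    }

With that check in the reuse decision, a connection pinned to one key is never handed to a transfer that expects a different pin, or no pin at all.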
diff --git a/m4/curl-confopts.m4 b/m4/curl-confopts.m4
index 651334d74..dc826f00b 100644
--- a/m4/curl-confopts.m4
+++ b/m4/curl-confopts.m4
@@ -5,7 +5,7 @@
 #                            | (__| |_| |  _ <| |___
 #                             \___|\___/|_| \_\_____|
 #
-# Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+# Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
 #
 # This software is licensed as described in the file COPYING, which
 # you should have received as part of this distribution. The terms
@@ -650,3 +650,39 @@ AC_DEFUN([CURL_CHECK_NTLM_WB], [
     NTLM_WB_ENABLED=1
   fi
 ])
+
+dnl CURL_CHECK_OPTION_ESNI
+dnl -----------------------------------------------------
+dnl Verify whether configure has been invoked with option
+dnl --enable-esni or --disable-esni, and set
+dnl shell variable want_esni as appropriate.
+
+AC_DEFUN([CURL_CHECK_OPTION_ESNI], [
+  AC_MSG_CHECKING([whether to enable ESNI support])
+  OPT_ESNI="default"
+  AC_ARG_ENABLE(esni,
+AC_HELP_STRING([--enable-esni],[Enable ESNI support])
+AC_HELP_STRING([--disable-esni],[Disable ESNI support]),
+  OPT_ESNI=$enableval)
+  case "$OPT_ESNI" in
+    no)
+      dnl --disable-esni option used
+      want_esni="no"
+      curl_esni_msg="no      (--enable-esni)"
+      AC_MSG_RESULT([no])
+      ;;
+    default)
+      dnl configure option not specified
+      want_esni="no"
+      curl_esni_msg="no      (--enable-esni)"
+      AC_MSG_RESULT([no])
+      ;;
+    *)
+      dnl --enable-esni option used
+      want_esni="yes"
+      curl_esni_msg="enabled (--disable-esni)"
+      experimental="esni"
+      AC_MSG_RESULT([yes])
+      ;;
+  esac
+])
diff --git a/packages/OS400/curl.inc.in b/packages/OS400/curl.inc.in
index 5a53b1b21..8be6c8986 100644
--- a/packages/OS400/curl.inc.in
+++ b/packages/OS400/curl.inc.in
@@ -1841,6 +1841,8 @@
      d                 c                   20014
      d  CURLMOPT_PUSHDATA...
      d                 c                   10015
+     d  CURLMOPT_MAX_CONCURRENT_STREAMS...
+     d                 c                   10016
       *
       * Bitmask bits for CURLMOPT_PIPELING.
       *
diff --git a/packages/OS400/os400sys.c b/packages/OS400/os400sys.c
index 85dd20e40..445141eec 100644
--- a/packages/OS400/os400sys.c
+++ b/packages/OS400/os400sys.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -195,7 +195,7 @@ buffer_threaded(localkey_t key, long size)
 
     /* Allocate buffer descriptors for the current thread. */
 
-    bufs = calloc((size_t) LK_LAST, sizeof *bufs);
+    bufs = calloc((size_t) LK_LAST, sizeof(*bufs));
     if(!bufs)
       return (char *) NULL;
 
@@ -224,7 +224,7 @@ buffer_undef(localkey_t key, long size)
   if(Curl_thread_buffer == buffer_undef) {      /* If unchanged during lock. */
     if(!pthread_key_create(&thdkey, thdbufdestroy))
       Curl_thread_buffer = buffer_threaded;
-    else if(!(locbufs = calloc((size_t) LK_LAST, sizeof *locbufs))) {
+    else if(!(locbufs = calloc((size_t) LK_LAST, sizeof(*locbufs)))) {
       pthread_mutex_unlock(&mutex);
       return (char *) NULL;
       }
@@ -390,7 +390,7 @@ Curl_gsk_environment_open(gsk_handle * my_env_handle)
 
   if(!my_env_handle)
     return GSK_OS400_ERROR_INVALID_POINTER;
-  p = (struct Curl_gsk_descriptor *) malloc(sizeof *p);
+  p = (struct Curl_gsk_descriptor *) malloc(sizeof(*p));
   if(!p)
     return GSK_INSUFFICIENT_STORAGE;
   p->strlist = (struct gskstrlist *) NULL;
@@ -417,7 +417,7 @@ Curl_gsk_secure_soc_open(gsk_handle my_env_handle,
   if(!my_session_handle)
     return GSK_OS400_ERROR_INVALID_POINTER;
   h = ((struct Curl_gsk_descriptor *) my_env_handle)->h;
-  p = (struct Curl_gsk_descriptor *) malloc(sizeof *p);
+  p = (struct Curl_gsk_descriptor *) malloc(sizeof(*p));
   if(!p)
     return GSK_INSUFFICIENT_STORAGE;
   p->strlist = (struct gskstrlist *) NULL;
@@ -598,7 +598,7 @@ cachestring(struct Curl_gsk_descriptor * p,
     if(sp->ebcdicstr == ebcdicbuf)
       break;
   if(!sp) {
-    sp = (struct gskstrlist *) malloc(sizeof *sp);
+    sp = (struct gskstrlist *) malloc(sizeof(*sp));
     if(!sp)
       return GSK_INSUFFICIENT_STORAGE;
     asciibuf = malloc(bufsize + 1);
@@ -800,7 +800,7 @@ Curl_gss_import_name_a(OM_uint32 * minor_status, gss_buffer_t in_name,
   if(!in_name || !in_name->value || !in_name->length)
     return gss_import_name(minor_status, in_name, in_name_type, out_name);
 
-  memcpy((char *) &in, (char *) in_name, sizeof in);
+  memcpy((char *) &in, (char *) in_name, sizeof(in));
   i = in.length;
 
   in.value = malloc(i + 1);
@@ -1048,7 +1048,7 @@ Curl_ldap_search_s_a(void * ld, char * base, int scope, char * filter,
     for(i = 0; attrs[i++];)
       ;
 
-    eattrs = calloc(i, sizeof *eattrs);
+    eattrs = calloc(i, sizeof(*eattrs));
     if(!eattrs)
       status = LDAP_NO_MEMORY;
     else {
@@ -1227,19 +1227,18 @@ Curl_ldap_next_attribute_a(void * ld,
 
 
 static int
-convert_sockaddr(struct sockaddr_storage * dstaddr,
-                                const struct sockaddr * srcaddr, int srclen)
-
+sockaddr2ebcdic(struct sockaddr_storage *dstaddr,
+                const struct sockaddr *srcaddr, int srclen)
 {
-  const struct sockaddr_un * srcu;
-  struct sockaddr_un * dstu;
+  const struct sockaddr_un *srcu;
+  struct sockaddr_un *dstu;
   unsigned int i;
   unsigned int dstsize;
 
-  /* Convert a socket address into job CCSID, if needed. */
+  /* Convert a socket address to job CCSID, if needed. */
 
   if(!srcaddr || srclen < offsetof(struct sockaddr, sa_family) +
-     sizeof srcaddr->sa_family || srclen > sizeof *dstaddr) {
+     sizeof(srcaddr->sa_family) || srclen > sizeof(*dstaddr)) {
     errno = EINVAL;
     return -1;
     }
@@ -1251,26 +1250,67 @@ convert_sockaddr(struct sockaddr_storage * dstaddr,
   case AF_UNIX:
     srcu = (const struct sockaddr_un *) srcaddr;
     dstu = (struct sockaddr_un *) dstaddr;
-    dstsize = sizeof *dstaddr - offsetof(struct sockaddr_un, sun_path);
+    dstsize = sizeof(*dstaddr) - offsetof(struct sockaddr_un, sun_path);
     srclen -= offsetof(struct sockaddr_un, sun_path);
     i = QadrtConvertA2E(dstu->sun_path, srcu->sun_path, dstsize - 1, srclen);
     dstu->sun_path[i] = '\0';
-    i += offsetof(struct sockaddr_un, sun_path);
-    srclen = i;
+    srclen = i + offsetof(struct sockaddr_un, sun_path);
+    }
+
+  return srclen;
+}
+
+
+static int
+sockaddr2ascii(struct sockaddr *dstaddr, int dstlen,
+               const struct sockaddr_storage *srcaddr, int srclen)
+{
+  const struct sockaddr_un *srcu;
+  struct sockaddr_un *dstu;
+  unsigned int dstsize;
+
+  /* Convert a socket address to ASCII, if needed. */
+
+  if(!srclen)
+    return 0;
+  if(srclen > dstlen)
+    srclen = dstlen;
+  if(!srcaddr || srclen < 0) {
+    errno = EINVAL;
+    return -1;
     }
 
+  memcpy((char *) dstaddr, (char *) srcaddr, srclen);
+
+  if(srclen >= offsetof(struct sockaddr_storage, ss_family) +
+     sizeof(srcaddr->ss_family)) {
+    switch (srcaddr->ss_family) {
+
+    case AF_UNIX:
+      srcu = (const struct sockaddr_un *) srcaddr;
+      dstu = (struct sockaddr_un *) dstaddr;
+      dstsize = dstlen - offsetof(struct sockaddr_un, sun_path);
+      srclen -= offsetof(struct sockaddr_un, sun_path);
+      if(dstsize > 0 && srclen > 0) {
+        srclen = QadrtConvertE2A(dstu->sun_path, srcu->sun_path,
+                                 dstsize - 1, srclen);
+        dstu->sun_path[srclen] = '\0';
+      }
+      srclen += offsetof(struct sockaddr_un, sun_path);
+    }
+  }
+
   return srclen;
 }
 
 
 int
 Curl_os400_connect(int sd, struct sockaddr * destaddr, int addrlen)
-
 {
   int i;
   struct sockaddr_storage laddr;
 
-  i = convert_sockaddr(&laddr, destaddr, addrlen);
+  i = sockaddr2ebcdic(&laddr, destaddr, addrlen);
 
   if(i < 0)
     return -1;
@@ -1281,12 +1321,11 @@ Curl_os400_connect(int sd, struct sockaddr * destaddr, int addrlen)
 
 int
 Curl_os400_bind(int sd, struct sockaddr * localaddr, int addrlen)
-
 {
   int i;
   struct sockaddr_storage laddr;
 
-  i = convert_sockaddr(&laddr, localaddr, addrlen);
+  i = sockaddr2ebcdic(&laddr, localaddr, addrlen);
 
   if(i < 0)
     return -1;
@@ -1298,12 +1337,11 @@ Curl_os400_bind(int sd, struct sockaddr * localaddr, int addrlen)
 int
 Curl_os400_sendto(int sd, char * buffer, int buflen, int flags,
                                 struct sockaddr * dstaddr, int addrlen)
-
 {
   int i;
   struct sockaddr_storage laddr;
 
-  i = convert_sockaddr(&laddr, dstaddr, addrlen);
+  i = sockaddr2ebcdic(&laddr, dstaddr, addrlen);
 
   if(i < 0)
     return -1;
@@ -1315,19 +1353,14 @@ Curl_os400_sendto(int sd, char * buffer, int buflen, int flags,
 int
 Curl_os400_recvfrom(int sd, char * buffer, int buflen, int flags,
                                 struct sockaddr * fromaddr, int * addrlen)
-
 {
-  int i;
   int rcvlen;
-  int laddrlen;
-  const struct sockaddr_un * srcu;
-  struct sockaddr_un * dstu;
   struct sockaddr_storage laddr;
+  int laddrlen = sizeof(laddr);
 
   if(!fromaddr || !addrlen || *addrlen <= 0)
     return recvfrom(sd, buffer, buflen, flags, fromaddr, addrlen);
 
-  laddrlen = sizeof laddr;
   laddr.ss_family = AF_UNSPEC;          /* To detect if unused. */
   rcvlen = recvfrom(sd, buffer, buflen, flags,
                     (struct sockaddr *) &laddr, &laddrlen);
@@ -1335,36 +1368,51 @@ Curl_os400_recvfrom(int sd, char * buffer, int buflen, int flags,
   if(rcvlen < 0)
     return rcvlen;
 
-  switch (laddr.ss_family) {
-
-  case AF_UNIX:
-    srcu = (const struct sockaddr_un *) &laddr;
-    dstu = (struct sockaddr_un *) fromaddr;
-    i = *addrlen - offsetof(struct sockaddr_un, sun_path);
-    laddrlen -= offsetof(struct sockaddr_un, sun_path);
-    i = QadrtConvertE2A(dstu->sun_path, srcu->sun_path, i, laddrlen);
-    laddrlen = i + offsetof(struct sockaddr_un, sun_path);
-
-    if(laddrlen < *addrlen)
-      dstu->sun_path[i] = '\0';
+  if(laddr.ss_family == AF_UNSPEC)
+    laddrlen = 0;
+  else {
+    laddrlen = sockaddr2ascii(fromaddr, *addrlen, &laddr, laddrlen);
+    if(laddrlen < 0)
+      return laddrlen;
+  }
+  *addrlen = laddrlen;
+  return rcvlen;
+}
 
-    break;
 
-  case AF_UNSPEC:
-    break;
+int
+Curl_os400_getpeername(int sd, struct sockaddr *addr, int *addrlen)
+{
+  struct sockaddr_storage laddr;
+  int laddrlen = sizeof(laddr);
+  int retcode = getpeername(sd, (struct sockaddr *) &laddr, &laddrlen);
+
+  if(!retcode) {
+    laddrlen = sockaddr2ascii(addr, *addrlen, &laddr, laddrlen);
+    if(laddrlen < 0)
+      return laddrlen;
+    *addrlen = laddrlen;
+  }
 
-  default:
-    if(laddrlen > *addrlen)
-      laddrlen = *addrlen;
+  return retcode;
+}
 
-    if(laddrlen)
-      memcpy((char *) fromaddr, (char *) &laddr, laddrlen);
 
-    break;
-    }
+int
+Curl_os400_getsockname(int sd, struct sockaddr *addr, int *addrlen)
+{
+  struct sockaddr_storage laddr;
+  int laddrlen = sizeof(laddr);
+  int retcode = getsockname(sd, (struct sockaddr *) &laddr, &laddrlen);
+
+  if(!retcode) {
+    laddrlen = sockaddr2ascii(addr, *addrlen, &laddr, laddrlen);
+    if(laddrlen < 0)
+      return laddrlen;
+    *addrlen = laddrlen;
+  }
 
-  *addrlen = laddrlen;
-  return rcvlen;
+  return retcode;
 }
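The new Curl_os400_getpeername()/Curl_os400_getsockname() wrappers above follow one pattern: fetch the address into a local sockaddr_storage, run it through sockaddr2ascii(), then copy at most the caller-supplied length back out. A POSIX-only sketch of that shape, with the QadrtConvertE2A() code-set conversion replaced by a plain copy for illustration:

    #include <string.h>
    #include <sys/socket.h>

    /* Sketch (not the OS/400 code itself): capture the peer address in a
       local buffer first, then copy it out truncated to the caller's length. */
    static int wrapped_getpeername(int sd, struct sockaddr *addr,
                                   socklen_t *addrlen)
    {
      struct sockaddr_storage laddr;
      socklen_t laddrlen = sizeof(laddr);
      int rc = getpeername(sd, (struct sockaddr *)&laddr, &laddrlen);

      if(!rc) {
        if(laddrlen > *addrlen)
          laddrlen = *addrlen;          /* never overflow the caller's buffer */
        memcpy(addr, &laddr, laddrlen); /* code-set conversion would go here */
        *addrlen = laddrlen;
      }
      return rc;
    }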
 
 
diff --git a/scripts/delta b/scripts/delta
index 81de75338..3d3ecc13f 100755
--- a/scripts/delta
+++ b/scripts/delta
@@ -53,8 +53,8 @@ $bcontribs = `git show $start:docs/THANKS | grep -c '^[^ ]'`;
 $contribs = $acontribs - $bcontribs;
 
 # number of setops:
-$asetopts=`grep "^  CINIT" include/curl/curl.h  | grep -cv OBSOLETE`;
-$bsetopts=`git show $start:include/curl/curl.h | grep "^  CINIT" | grep -cv OBSOLETE`;
+$asetopts=`grep "^  CINIT" include/gnurl/curl.h  | grep -cv OBSOLETE`;
+$bsetopts=`git show $start:include/gnurl/curl.h | grep "^  CINIT" | grep -cv OBSOLETE`;
 $nsetopts = $asetopts - $bsetopts;
 
 # Number of command line options:
diff --git a/src/tool_cfgable.h b/src/tool_cfgable.h
index ff80f8eb8..7232c35e3 100644
--- a/src/tool_cfgable.h
+++ b/src/tool_cfgable.h
@@ -22,11 +22,9 @@
  *
  ***************************************************************************/
 #include "tool_setup.h"
-
 #include "tool_sdecls.h"
-
 #include "tool_metalink.h"
-
+#include "tool_urlglob.h"
 #include "tool_formparse.h"
 
 typedef enum {
@@ -37,6 +35,20 @@ typedef enum {
 
 struct GlobalConfig;
 
+struct State {
+  struct getout *urlnode;
+  URLGlob *inglob;
+  URLGlob *urls;
+  char *outfiles;
+  char *httpgetfields;
+  char *uploadfile;
+  unsigned long infilenum; /* number of files to upload */
+  unsigned long up;  /* upload file counter within a single upload glob */
+  unsigned long urlnum; /* how many iterations this single URL has with ranges
+                           etc */
+  unsigned long li;
+};
+
 struct OperationConfig {
   bool remote_time;
   char *random_file;
@@ -262,6 +274,7 @@ struct OperationConfig {
   struct GlobalConfig *global;
   struct OperationConfig *prev;
   struct OperationConfig *next;   /* Always last in the struct */
+  struct State state;             /* for create_transfer() */
 };
 
 struct GlobalConfig {
diff --git a/src/tool_getparam.c b/src/tool_getparam.c
index 2c1868383..3882cb97e 100644
--- a/src/tool_getparam.c
+++ b/src/tool_getparam.c
@@ -243,7 +243,7 @@ static const struct LongShort aliases[]= {
   {"El", "tlspassword",              ARG_STRING},
   {"Em", "tlsauthtype",              ARG_STRING},
   {"En", "ssl-allow-beast",          ARG_BOOL},
-  {"Eo", "login-options",            ARG_STRING},
+  /* Eo */
   {"Ep", "pinnedpubkey",             ARG_STRING},
   {"EP", "proxy-pinnedpubkey",       ARG_STRING},
   {"Eq", "cert-status",              ARG_BOOL},
@@ -322,6 +322,7 @@ static const struct LongShort aliases[]= {
   {"Z",  "parallel",                 ARG_BOOL},
   {"Zb", "parallel-max",             ARG_STRING},
   {"#",  "progress-bar",             ARG_BOOL},
+  {"#m", "progress-meter",           ARG_BOOL},
   {":",  "next",                     ARG_NONE},
 };
 
@@ -1172,11 +1173,16 @@ ParameterError getparameter(const char *flag, /* f or -long-flag */
         break;
       }
       break;
-    case '#': /* --progress-bar */
-      if(toggle)
-        global->progressmode = CURL_PROGRESS_BAR;
-      else
-        global->progressmode = CURL_PROGRESS_STATS;
+    case '#':
+      switch(subletter) {
+      case 'm': /* --progress-meter */
+        global->noprogress = !toggle;
+        break;
+      default:  /* --progress-bar */
+        global->progressmode =
+          toggle ? CURL_PROGRESS_BAR : CURL_PROGRESS_STATS;
+        break;
+      }
       break;
     case ':': /* --next */
       return PARAM_NEXT_OPERATION;
@@ -1569,10 +1575,6 @@ ParameterError getparameter(const char *flag, /* f or -long-flag */
           config->ssl_allow_beast = toggle;
         break;
 
-      case 'o': /* --login-options */
-        GetStr(&config->login_options, nextarg);
-        break;
-
       case 'p': /* Pinned public key DER file */
         GetStr(&config->pinnedpubkey, nextarg);
         break;
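The new "#m" subletter above adds --progress-meter/--no-progress-meter as a plain boolean that drives global->noprogress, while the retired "Eo" alias slot is left as a placeholder comment. At the libcurl level the corresponding switch is CURLOPT_NOPROGRESS; a minimal illustrative sketch (example URL only, not from the patch):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* 1L disables the progress meter, 0L enables it;
           --no-progress-meter corresponds to the disabled state */
        curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }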
diff --git a/src/tool_help.c b/src/tool_help.c
index 271439053..022956676 100644
--- a/src/tool_help.c
+++ b/src/tool_help.c
@@ -263,6 +263,8 @@ static const struct helptxt helptext[] = {
    "Disable TCP keepalive on the connection"},
   {"    --no-npn",
    "Disable the NPN TLS extension"},
+  {"    --no-progress-meter",
+   "Do not show the progress meter"},
   {"    --no-sessionid",
    "Disable SSL session-ID reusing"},
   {"    --noproxy <no-proxy-list>",
@@ -540,6 +542,7 @@ static const struct feat feats[] = {
   {"MultiSSL",       CURL_VERSION_MULTI_SSL},
   {"PSL",            CURL_VERSION_PSL},
   {"alt-svc",        CURL_VERSION_ALTSVC},
+  {"ESNI",           CURL_VERSION_ESNI},
 };
 
 void tool_help(void)
diff --git a/src/tool_metalink.c b/src/tool_metalink.c
index 0740407f9..889da4bff 100644
--- a/src/tool_metalink.c
+++ b/src/tool_metalink.c
@@ -965,7 +965,7 @@ static void delete_metalink_resource(metalink_resource *res)
   Curl_safefree(res);
 }
 
-static void delete_metalinkfile(metalinkfile *mlfile)
+void delete_metalinkfile(metalinkfile *mlfile)
 {
   metalink_resource *res;
   if(mlfile == NULL) {
@@ -984,12 +984,14 @@ static void delete_metalinkfile(metalinkfile *mlfile)
 
 void clean_metalink(struct OperationConfig *config)
 {
-  while(config->metalinkfile_list) {
-    metalinkfile *mlfile = config->metalinkfile_list;
-    config->metalinkfile_list = config->metalinkfile_list->next;
-    delete_metalinkfile(mlfile);
+  if(config) {
+    while(config->metalinkfile_list) {
+      metalinkfile *mlfile = config->metalinkfile_list;
+      config->metalinkfile_list = config->metalinkfile_list->next;
+      delete_metalinkfile(mlfile);
+    }
+    config->metalinkfile_last = 0;
   }
-  config->metalinkfile_last = 0;
 }
 
 void metalink_cleanup(void)
diff --git a/src/tool_metalink.h b/src/tool_metalink.h
index 1e367033c..f5ec306f7 100644
--- a/src/tool_metalink.h
+++ b/src/tool_metalink.h
@@ -105,6 +105,8 @@ extern const digest_params SHA256_DIGEST_PARAMS[1];
  * Counts the resource in the metalinkfile.
  */
 int count_next_metalink_resource(metalinkfile *mlfile);
+
+void delete_metalinkfile(metalinkfile *mlfile);
 void clean_metalink(struct OperationConfig *config);
 
 /*
@@ -158,6 +160,7 @@ void metalink_cleanup(void);
 #else /* USE_METALINK */
 
 #define count_next_metalink_resource(x)  0
+#define delete_metalinkfile(x)  (void)x
 #define clean_metalink(x)  (void)x
 
 /* metalink_cleanup() takes no arguments */
diff --git a/src/tool_operate.c b/src/tool_operate.c
index d2ad9642d..3087d2d14 100644
--- a/src/tool_operate.c
+++ b/src/tool_operate.c
@@ -103,10 +103,14 @@ CURLcode curl_easy_perform_ev(CURL *easy);
   "this situation and\nhow to fix it, please visit the web page mentioned " \
   "above.\n"
 
-static CURLcode create_transfers(struct GlobalConfig *global,
-                                 struct OperationConfig *config,
-                                 CURLSH *share,
-                                 bool capath_from_env);
+static CURLcode single_transfer(struct GlobalConfig *global,
+                                struct OperationConfig *config,
+                                CURLSH *share,
+                                bool capath_from_env,
+                                bool *added);
+static CURLcode create_transfer(struct GlobalConfig *global,
+                                CURLSH *share,
+                                bool *added);
 
 static bool is_fatal_error(CURLcode code)
 {
@@ -200,7 +204,9 @@ static curl_off_t VmsSpecialSize(const char *name,
 struct per_transfer *transfers; /* first node */
 static struct per_transfer *transfersl; /* last node */
 
-static CURLcode add_transfer(struct per_transfer **per)
+/* add_per_transfer creates a new 'per_transfer' node in the linked
+   list of transfers */
+static CURLcode add_per_transfer(struct per_transfer **per)
 {
   struct per_transfer *p;
   p = calloc(sizeof(struct per_transfer), 1);
@@ -224,7 +230,7 @@ static CURLcode add_transfer(struct per_transfer **per)
 
 /* Remove the specified transfer from the list (and free it), return the next
    in line */
-static struct per_transfer *del_transfer(struct per_transfer *per)
+static struct per_transfer *del_per_transfer(struct per_transfer *per)
 {
   struct per_transfer *n;
   struct per_transfer *p;
@@ -316,23 +322,24 @@ static CURLcode pre_transfer(struct GlobalConfig *global,
       my_setopt(per->curl, CURLOPT_INFILESIZE_LARGE, uploadfilesize);
     per->input.fd = per->infd;
   }
-  show_error:
   return result;
 }
 
 /*
  * Call this after a transfer has completed.
  */
-static CURLcode post_transfer(struct GlobalConfig *global,
-                              CURLSH *share,
-                              struct per_transfer *per,
-                              CURLcode result,
-                              bool *retryp)
+static CURLcode post_per_transfer(struct GlobalConfig *global,
+                                  struct per_transfer *per,
+                                  CURLcode result,
+                                  bool *retryp)
 {
   struct OutStruct *outs = &per->outs;
   CURL *curl = per->curl;
   struct OperationConfig *config = per->config;
 
+  if(!curl || !config)
+    return result;
+
   *retryp = FALSE;
 
   if(per->infdopen)
@@ -401,7 +408,6 @@ static CURLcode post_transfer(struct GlobalConfig *global,
     else if(rv == -1)
       fprintf(config->global->errors, "Metalink: parsing (%s) FAILED\n",
               per->this_url);
-    result = create_transfers(global, config, share, FALSE);
   }
   else if(per->metalink && result == CURLE_OK && !per->metalink_next_res) {
     int rv;
@@ -410,8 +416,6 @@ static CURLcode post_transfer(struct GlobalConfig *global,
     if(!rv)
       per->metalink_next_res = 1;
   }
-#else
-  (void)share;
 #endif /* USE_METALINK */
 
 #ifdef USE_METALINK
@@ -464,6 +468,7 @@ static CURLcode post_transfer(struct GlobalConfig *global,
         curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response);
 
         switch(response) {
+        case 429: /* Too Many Requests (RFC6585) */
         case 500: /* Internal Server Error */
         case 502: /* Bad Gateway */
         case 503: /* Service Unavailable */
@@ -524,7 +529,7 @@ static CURLcode post_transfer(struct GlobalConfig *global,
       warnf(config->global, "Transient problem: %s "
             "Will retry in %ld seconds. "
             "%ld retries left.\n",
-            m[retry], per->retry_sleep/1000L, per->retry_numretries);
+            m[retry], sleeptime/1000L, per->retry_numretries);
 
       per->retry_numretries--;
       tool_go_sleep(sleeptime);
@@ -651,60 +656,81 @@ static CURLcode post_transfer(struct GlobalConfig *global,
   return CURLE_OK;
 }
 
-/* go through the list of URLs and configs and add transfers */
+static void single_transfer_cleanup(struct OperationConfig *config)
+{
+  if(config) {
+    struct State *state = &config->state;
+    if(state->urls) {
+      /* Free list of remaining URLs */
+      glob_cleanup(state->urls);
+      state->urls = NULL;
+    }
+    Curl_safefree(state->outfiles);
+    Curl_safefree(state->httpgetfields);
+    Curl_safefree(state->uploadfile);
+    if(state->inglob) {
+      /* Free list of globbed upload files */
+      glob_cleanup(state->inglob);
+      state->inglob = NULL;
+    }
+  }
+}
+
+/* create the next (singular) transfer */
 
-static CURLcode create_transfers(struct GlobalConfig *global,
-                                 struct OperationConfig *config,
-                                 CURLSH *share,
-                                 bool capath_from_env)
+static CURLcode single_transfer(struct GlobalConfig *global,
+                                struct OperationConfig *config,
+                                CURLSH *share,
+                                bool capath_from_env,
+                                bool *added)
 {
   CURLcode result = CURLE_OK;
   struct getout *urlnode;
   metalinkfile *mlfile_last = NULL;
   bool orig_noprogress = global->noprogress;
   bool orig_isatty = global->isatty;
-  char *httpgetfields = NULL;
+  struct State *state = &config->state;
+  char *httpgetfields = state->httpgetfields;
+  *added = FALSE; /* not yet */
 
   if(config->postfields) {
     if(config->use_httpget) {
-      /* Use the postfields data for a http get */
-      httpgetfields = strdup(config->postfields);
-      Curl_safefree(config->postfields);
       if(!httpgetfields) {
-        helpf(global->errors, "out of memory\n");
-        result = CURLE_OUT_OF_MEMORY;
-        goto quit_curl;
-      }
-      if(SetHTTPrequest(config,
-                        (config->no_body?HTTPREQ_HEAD:HTTPREQ_GET),
-                        &config->httpreq)) {
-        result = CURLE_FAILED_INIT;
-        goto quit_curl;
+        /* Use the postfields data for a http get */
+        httpgetfields = state->httpgetfields = strdup(config->postfields);
+        Curl_safefree(config->postfields);
+        if(!httpgetfields) {
+          helpf(global->errors, "out of memory\n");
+          result = CURLE_OUT_OF_MEMORY;
+        }
+        else if(SetHTTPrequest(config,
+                               (config->no_body?HTTPREQ_HEAD:HTTPREQ_GET),
+                               &config->httpreq)) {
+          result = CURLE_FAILED_INIT;
+        }
       }
     }
     else {
-      if(SetHTTPrequest(config, HTTPREQ_SIMPLEPOST, &config->httpreq)) {
+      if(SetHTTPrequest(config, HTTPREQ_SIMPLEPOST, &config->httpreq))
         result = CURLE_FAILED_INIT;
-        goto quit_curl;
-      }
     }
+    if(result)
+      return result;
+  }
+  if(!state->urlnode) {
+    /* first time caller, setup things */
+    state->urlnode = config->url_list;
+    state->infilenum = 1;
   }
 
-  for(urlnode = config->url_list; urlnode; urlnode = urlnode->next) {
-    unsigned long li;
-    unsigned long up; /* upload file counter within a single upload glob */
+  while(config->state.urlnode) {
     char *infiles; /* might be a glob pattern */
-    char *outfiles;
-    unsigned long infilenum;
-    URLGlob *inglob;
+    URLGlob *inglob = state->inglob;
     bool metalink = FALSE; /* metalink download? */
     metalinkfile *mlfile;
     metalink_resource *mlres;
 
-    outfiles = NULL;
-    infilenum = 1;
-    inglob = NULL;
-
+    urlnode = config->state.urlnode;
     if(urlnode->flags & GETOUT_METALINK) {
       metalink = 1;
       if(mlfile_last == NULL) {
@@ -727,13 +753,15 @@ static CURLcode create_transfers(struct GlobalConfig *global,
       Curl_safefree(urlnode->outfile);
       Curl_safefree(urlnode->infile);
       urlnode->flags = 0;
+      config->state.urlnode = urlnode->next;
+      state->up = 0;
       continue; /* next URL please */
     }
 
     /* save outfile pattern before expansion */
-    if(urlnode->outfile) {
-      outfiles = strdup(urlnode->outfile);
-      if(!outfiles) {
+    if(urlnode->outfile && !state->outfiles) {
+      state->outfiles = strdup(urlnode->outfile);
+      if(!state->outfiles) {
         helpf(global->errors, "out of memory\n");
         result = CURLE_OUT_OF_MEMORY;
         break;
@@ -742,88 +770,95 @@ static CURLcode create_transfers(struct GlobalConfig *global,
 
     infiles = urlnode->infile;
 
-    if(!config->globoff && infiles) {
+    if(!config->globoff && infiles && !inglob) {
       /* Unless explicitly shut off */
-      result = glob_url(&inglob, infiles, &infilenum,
+      result = glob_url(&inglob, infiles, &state->infilenum,
                         global->showerror?global->errors:NULL);
-      if(result) {
-        Curl_safefree(outfiles);
+      if(result)
         break;
-      }
+      config->state.inglob = inglob;
     }
 
-    /* Here's the loop for uploading multiple files within the same
-       single globbed string. If no upload, we enter the loop once anyway. */
-    for(up = 0 ; up < infilenum; up++) {
-
-      char *uploadfile; /* a single file, never a glob */
+    {
       int separator;
-      URLGlob *urls;
       unsigned long urlnum;
 
-      uploadfile = NULL;
-      urls = NULL;
-      urlnum = 0;
-
-      if(!up && !infiles)
+      if(!state->up && !infiles)
         Curl_nop_stmt;
       else {
-        if(inglob) {
-          result = glob_next_url(&uploadfile, inglob);
-          if(result == CURLE_OUT_OF_MEMORY)
-            helpf(global->errors, "out of memory\n");
-        }
-        else if(!up) {
-          uploadfile = strdup(infiles);
-          if(!uploadfile) {
-            helpf(global->errors, "out of memory\n");
-            result = CURLE_OUT_OF_MEMORY;
+        if(!state->uploadfile) {
+          if(inglob) {
+            result = glob_next_url(&state->uploadfile, inglob);
+            if(result == CURLE_OUT_OF_MEMORY)
+              helpf(global->errors, "out of memory\n");
+          }
+          else if(!state->up) {
+            state->uploadfile = strdup(infiles);
+            if(!state->uploadfile) {
+              helpf(global->errors, "out of memory\n");
+              result = CURLE_OUT_OF_MEMORY;
+            }
           }
         }
-        if(!uploadfile)
+        if(result)
           break;
       }
 
-      if(metalink) {
-        /* For Metalink download, we don't use glob. Instead we use
-           the number of resources as urlnum. */
-        urlnum = count_next_metalink_resource(mlfile);
-      }
-      else if(!config->globoff) {
-        /* Unless explicitly shut off, we expand '{...}' and '[...]'
-           expressions and return total number of URLs in pattern set */
-        result = glob_url(&urls, urlnode->url, &urlnum,
-                          global->showerror?global->errors:NULL);
-        if(result) {
-          Curl_safefree(uploadfile);
-          break;
+      if(!state->urlnum) {
+        if(metalink) {
+          /* For Metalink download, we don't use glob. Instead we use
+             the number of resources as urlnum. */
+          urlnum = count_next_metalink_resource(mlfile);
         }
+        else if(!config->globoff) {
+          /* Unless explicitly shut off, we expand '{...}' and '[...]'
+             expressions and return total number of URLs in pattern set */
+          result = glob_url(&state->urls, urlnode->url, &state->urlnum,
+                            global->showerror?global->errors:NULL);
+          if(result)
+            break;
+          urlnum = state->urlnum;
+        }
+        else
+          urlnum = 1; /* without globbing, this is a single URL */
       }
       else
-        urlnum = 1; /* without globbing, this is a single URL */
+        urlnum = state->urlnum;
 
       /* if multiple files extracted to stdout, insert separators! */
-      separator = ((!outfiles || !strcmp(outfiles, "-")) && urlnum > 1);
+      separator = ((!state->outfiles ||
+                    !strcmp(state->outfiles, "-")) && urlnum > 1);
 
       /* Here's looping around each globbed URL */
-      for(li = 0 ; li < urlnum; li++) {
+
+      if(state->li >= urlnum) {
+        state->li = 0;
+        state->up++;
+      }
+      if(state->up < state->infilenum) {
         struct per_transfer *per;
         struct OutStruct *outs;
         struct InStruct *input;
         struct OutStruct *heads;
         struct HdrCbData *hdrcbdata = NULL;
         CURL *curl = curl_easy_init();
-
-        result = add_transfer(&per);
+        result = add_per_transfer(&per);
         if(result || !curl) {
-          free(uploadfile);
           curl_easy_cleanup(curl);
           result = CURLE_OUT_OF_MEMORY;
-          goto show_error;
+          break;
+        }
+        if(state->uploadfile) {
+          per->uploadfile = strdup(state->uploadfile);
+          if(!per->uploadfile) {
+            curl_easy_cleanup(curl);
+            result = CURLE_OUT_OF_MEMORY;
+            break;
+          }
         }
+        *added = TRUE;
         per->config = config;
         per->curl = curl;
-        per->uploadfile = uploadfile;
 
         /* default headers output stream is stdout */
         heads = &per->heads;
@@ -838,7 +873,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
             if(!newfile) {
               warnf(config->global, "Failed to open %s\n", config->headerfile);
               result = CURLE_WRITE_ERROR;
-              goto quit_curl;
+              break;
             }
             else {
               heads->filename = config->headerfile;
@@ -873,26 +908,26 @@ static CURLcode create_transfers(struct GlobalConfig *global,
           per->outfile = strdup(mlfile->filename);
           if(!per->outfile) {
             result = CURLE_OUT_OF_MEMORY;
-            goto show_error;
+            break;
           }
           per->this_url = strdup(mlres->url);
           if(!per->this_url) {
             result = CURLE_OUT_OF_MEMORY;
-            goto show_error;
+            break;
           }
           per->mlfile = mlfile;
         }
         else {
-          if(urls) {
-            result = glob_next_url(&per->this_url, urls);
+          if(state->urls) {
+            result = glob_next_url(&per->this_url, state->urls);
             if(result)
-              goto show_error;
+              break;
           }
-          else if(!li) {
+          else if(!state->li) {
             per->this_url = strdup(urlnode->url);
             if(!per->this_url) {
               result = CURLE_OUT_OF_MEMORY;
-              goto show_error;
+              break;
             }
           }
           else
@@ -900,11 +935,11 @@ static CURLcode create_transfers(struct GlobalConfig *global,
           if(!per->this_url)
             break;
 
-          if(outfiles) {
-            per->outfile = strdup(outfiles);
+          if(state->outfiles) {
+            per->outfile = strdup(state->outfiles);
             if(!per->outfile) {
               result = CURLE_OUT_OF_MEMORY;
-              goto show_error;
+              break;
             }
           }
         }
@@ -922,22 +957,22 @@ static CURLcode create_transfers(struct GlobalConfig *global,
             /* extract the file name from the URL */
             result = get_url_file_name(&per->outfile, per->this_url);
             if(result)
-              goto show_error;
+              break;
             if(!*per->outfile && !config->content_disposition) {
               helpf(global->errors, "Remote file name has no length!\n");
               result = CURLE_WRITE_ERROR;
-              goto quit_urls;
+              break;
             }
           }
-          else if(urls) {
+          else if(state->urls) {
             /* fill '#1' ... '#9' terms from URL pattern */
             char *storefile = per->outfile;
-            result = glob_match_url(&per->outfile, storefile, urls);
+            result = glob_match_url(&per->outfile, storefile, state->urls);
             Curl_safefree(storefile);
             if(result) {
               /* bad globbing */
               warnf(config->global, "bad output glob!\n");
-              goto quit_urls;
+              break;
             }
           }
 
@@ -947,11 +982,8 @@ static CURLcode create_transfers(struct GlobalConfig *global,
           if(config->create_dirs || metalink) {
             result = create_dir_hierarchy(per->outfile, global->errors);
             /* create_dir_hierarchy shows error upon CURLE_WRITE_ERROR */
-            if(result == CURLE_WRITE_ERROR)
-              goto quit_urls;
-            if(result) {
-              goto show_error;
-            }
+            if(result)
+              break;
           }
 
           if((urlnode->flags & GETOUT_USEREMOTE)
@@ -977,16 +1009,16 @@ static CURLcode create_transfers(struct GlobalConfig *global,
 #ifdef __VMS
             /* open file for output, forcing VMS output format into stream
                mode which is needed for stat() call above to always work. */
-            FILE *file = fopen(outfile, config->resume_from?"ab":"wb",
+            FILE *file = fopen(outfile, "ab",
                                "ctx=stm", "rfm=stmlf", "rat=cr", "mrs=0");
 #else
             /* open file for output: */
-            FILE *file = fopen(per->outfile, config->resume_from?"ab":"wb");
+            FILE *file = fopen(per->outfile, "ab");
 #endif
             if(!file) {
               helpf(global->errors, "Can't open '%s'!\n", per->outfile);
               result = CURLE_WRITE_ERROR;
-              goto quit_urls;
+              break;
             }
             outs->fopened = TRUE;
             outs->stream = file;
@@ -1006,7 +1038,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
           char *nurl = add_file_name_to_url(per->this_url, per->uploadfile);
           if(!nurl) {
             result = CURLE_OUT_OF_MEMORY;
-            goto show_error;
+            break;
           }
           per->this_url = nurl;
         }
@@ -1065,7 +1097,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
         if(urlnum > 1 && !global->mute) {
           per->separator_err =
             aprintf("\n[%lu/%lu]: %s --> %s",
-                    li + 1, urlnum, per->this_url,
+                    state->li + 1, urlnum, per->this_url,
                     per->outfile ? per->outfile : "<stdout>");
           if(separator)
             per->separator = aprintf("%s%s", CURLseparator, per->this_url);
@@ -1103,7 +1135,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
 
           if(!urlbuffer) {
             result = CURLE_OUT_OF_MEMORY;
-            goto show_error;
+            break;
           }
 
           Curl_safefree(per->this_url); /* free previous URL */
@@ -1124,11 +1156,10 @@ static CURLcode create_transfers(struct GlobalConfig *global,
         config->terminal_binary_ok =
           (per->outfile && !strcmp(per->outfile, "-"));
 
-        /* avoid having this setopt added to the --libcurl source
-           output */
+        /* Avoid having this setopt added to the --libcurl source output. */
         result = curl_easy_setopt(curl, CURLOPT_SHARE, share);
         if(result)
-          goto show_error;
+          break;
 
         if(!config->tcp_nodelay)
           my_setopt(curl, CURLOPT_TCP_NODELAY, 0L);
@@ -1256,12 +1287,14 @@ static CURLcode create_transfers(struct GlobalConfig *global,
         case HTTPREQ_MIMEPOST:
           result = tool2curlmime(curl, config->mimeroot, &config->mimepost);
           if(result)
-            goto show_error;
+            break;
           my_setopt_mimepost(curl, CURLOPT_MIMEPOST, config->mimepost);
           break;
         default:
           break;
         }
+        if(result)
+          break;
 
         /* new in libcurl 7.10.6 (default is Basic) */
         if(config->authtype)
@@ -1371,7 +1404,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
                   "SSL_CERT_DIR environment variable":"--capath");
           }
           else if(result)
-            goto show_error;
+            break;
         }
         /* For the time being if --proxy-capath is not set then we use the
            --capath value for it, if any. See #1257 */
@@ -1388,7 +1421,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
             }
           }
           else if(result)
-            goto show_error;
+            break;
         }
 
         if(config->crlfile)
@@ -1503,7 +1536,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
               Curl_safefree(home);
             }
             if(result)
-              goto show_error;
+              break;
           }
         }
 
@@ -1607,7 +1640,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
         if(config->engine) {
           result = res_setopt_str(curl, CURLOPT_SSLENGINE, config->engine);
           if(result)
-            goto show_error;
+            break;
         }
 
         /* new in curl 7.10.7, extended in 7.19.4. Modified to use
@@ -1855,7 +1888,7 @@ static CURLcode create_transfers(struct GlobalConfig *global,
           outs->metalink_parser = metalink_parser_context_new();
           if(outs->metalink_parser == NULL) {
             result = CURLE_OUT_OF_MEMORY;
-            goto show_error;
+            break;
           }
           fprintf(config->global->errors,
                   "Metalink: parsing (%s) metalink/XML...\n", per->this_url);
@@ -1874,51 +1907,37 @@ static CURLcode create_transfers(struct GlobalConfig *global,
         per->retry_sleep = per->retry_sleep_default; /* ms */
         per->retrystart = tvnow();
 
-      } /* loop to the next URL */
-
-      show_error:
-      quit_urls:
-
-      if(urls) {
-        /* Free list of remaining URLs */
-        glob_cleanup(urls);
-        urls = NULL;
+        state->li++;
       }
-
-      if(infilenum > 1) {
-        /* when file globbing, exit loop upon critical error */
-        if(is_fatal_error(result))
-          break;
+      else {
+        /* Free this URL node data without destroying the
+           the node itself nor modifying next pointer. */
+        Curl_safefree(urlnode->outfile);
+        Curl_safefree(urlnode->infile);
+        urlnode->flags = 0;
+        glob_cleanup(state->urls);
+        state->urls = NULL;
+        state->urlnum = 0;
+
+        Curl_safefree(state->outfiles);
+        Curl_safefree(state->uploadfile);
+        if(state->inglob) {
+          /* Free list of globbed upload files */
+          glob_cleanup(state->inglob);
+          state->inglob = NULL;
+        }
+        config->state.urlnode = urlnode->next;
+        state->up = 0;
+        continue;
       }
-      else if(result)
-        /* when not file globbing, exit loop upon any error */
-        break;
-
-    } /* loop to the next globbed upload file */
-
-    /* Free loop-local allocated memory */
-
-    Curl_safefree(outfiles);
-
-    if(inglob) {
-      /* Free list of globbed upload files */
-      glob_cleanup(inglob);
-      inglob = NULL;
     }
+    break;
+  }
 
-    /* Free this URL node data without destroying the
-       the node itself nor modifying next pointer. */
-    Curl_safefree(urlnode->url);
-    Curl_safefree(urlnode->outfile);
-    Curl_safefree(urlnode->infile);
-    urlnode->flags = 0;
-
-  } /* for-loop through all URLs */
-  quit_curl:
-
-  /* Free function-local referenced allocated memory */
-  Curl_safefree(httpgetfields);
-
+  if(!*added || result) {
+    *added = FALSE;
+    single_transfer_cleanup(config);
+  }
   return result;
 }
 
@@ -1929,18 +1948,23 @@ static long all_added; /* number of easy handles currently added */
  * to add even after this call returns. sets 'addedp' to TRUE if one or more
  * transfers were added.
  */
-static int add_parallel_transfers(struct GlobalConfig *global,
-                                  CURLM *multi,
-                                  bool *morep,
-                                  bool *addedp)
+static CURLcode add_parallel_transfers(struct GlobalConfig *global,
+                                       CURLM *multi,
+                                       CURLSH *share,
+                                       bool *morep,
+                                       bool *addedp)
 {
   struct per_transfer *per;
-  CURLcode result;
+  CURLcode result = CURLE_OK;
   CURLMcode mcode;
   *addedp = FALSE;
   *morep = FALSE;
+  result = create_transfer(global, share, addedp);
+  if(result || !*addedp)
+    return result;
   for(per = transfers; per && (all_added < global->parallel_max);
       per = per->next) {
+    bool getadded = FALSE;
     if(per->added)
       /* already added */
       continue;
@@ -1956,6 +1980,10 @@ static int add_parallel_transfers(struct GlobalConfig *global,
     mcode = curl_multi_add_handle(multi, per->curl);
     if(mcode)
       return CURLE_OUT_OF_MEMORY;
+
+    result = create_transfer(global, share, &getadded);
+    if(result)
+      return result;
     per->added = TRUE;
     all_added++;
     *addedp = TRUE;
@@ -1968,7 +1996,6 @@ static CURLcode parallel_transfers(struct GlobalConfig *global,
                                    CURLSH *share)
 {
   CURLM *multi;
-  bool done = FALSE;
   CURLMcode mcode = CURLM_OK;
   CURLcode result = CURLE_OK;
   int still_running = 1;
@@ -1980,12 +2007,12 @@ static CURLcode parallel_transfers(struct GlobalConfig *global,
   if(!multi)
     return CURLE_OUT_OF_MEMORY;
 
-  result = add_parallel_transfers(global, multi,
+  result = add_parallel_transfers(global, multi, share,
                                   &more_transfers, &added_transfers);
   if(result)
     return result;
 
-  while(!done && !mcode && (still_running || more_transfers)) {
+  while(!mcode && (still_running || more_transfers)) {
     mcode = curl_multi_poll(multi, NULL, 0, 1000, NULL);
     if(!mcode)
       mcode = curl_multi_perform(multi, &still_running);
@@ -2006,18 +2033,19 @@ static CURLcode parallel_transfers(struct GlobalConfig *global,
           curl_easy_getinfo(easy, CURLINFO_PRIVATE, (void *)&ended);
           curl_multi_remove_handle(multi, easy);
 
-          result = post_transfer(global, share, ended, result, &retry);
+          result = post_per_transfer(global, ended, result, &retry);
           if(retry)
             continue;
           progress_finalize(ended); /* before it goes away */
           all_added--; /* one fewer added */
           removed = TRUE;
-          (void)del_transfer(ended);
+          (void)del_per_transfer(ended);
         }
       } while(msg);
       if(removed) {
         /* one or more transfers completed, add more! */
-        (void)add_parallel_transfers(global, multi, &more_transfers,
+        (void)add_parallel_transfers(global, multi, share,
+                                     &more_transfers,
                                      &added_transfers);
         if(added_transfers)
           /* we added new ones, make sure the loop doesn't exit yet */
@@ -2047,8 +2075,14 @@ static CURLcode serial_transfers(struct GlobalConfig *global,
   CURLcode returncode = CURLE_OK;
   CURLcode result = CURLE_OK;
   struct per_transfer *per;
+  bool added = FALSE;
+
+  result = create_transfer(global, share, &added);
+  if(result || !added)
+    return result;
   for(per = transfers; per;) {
     bool retry;
+    bool bailout = FALSE;
     result = pre_transfer(global, per);
     if(result)
       break;
@@ -2070,28 +2104,48 @@ static CURLcode serial_transfers(struct GlobalConfig *global,
     /* store the result of the actual transfer */
     returncode = result;
 
-    result = post_transfer(global, share, per, result, &retry);
+    result = post_per_transfer(global, per, result, &retry);
     if(retry)
       continue;
-    per = del_transfer(per);
 
     /* Bail out upon critical errors or --fail-early */
     if(result || is_fatal_error(returncode) ||
        (returncode && global->fail_early))
+      bailout = TRUE;
+    else {
+      /* setup the next one just before we delete this */
+      result = create_transfer(global, share, &added);
+      if(result)
+        bailout = TRUE;
+    }
+
+    /* Release metalink related resources here */
+    delete_metalinkfile(per->mlfile);
+
+    per = del_per_transfer(per);
+
+    if(bailout)
       break;
   }
   if(returncode)
     /* returncode errors have priority */
     result = returncode;
+
+  if(result)
+    single_transfer_cleanup(global->current);
+
   return result;
 }
 
-static CURLcode operate_do(struct GlobalConfig *global,
-                           struct OperationConfig *config,
-                           CURLSH *share)
+/* setup a transfer for the given config */
+static CURLcode transfer_per_config(struct GlobalConfig *global,
+                                    struct OperationConfig *config,
+                                    CURLSH *share,
+                                    bool *added)
 {
   CURLcode result = CURLE_OK;
   bool capath_from_env;
+  *added = FALSE;
 
   /* Check we have a url */
   if(!config->url_list || !config->url_list->url) {
@@ -2180,13 +2234,34 @@ static CURLcode operate_do(struct GlobalConfig *global,
   }
 
   if(!result)
-    /* loop through the list of given URLs */
-    result = create_transfers(global, config, share, capath_from_env);
+    result = single_transfer(global, config, share, capath_from_env, added);
 
   return result;
 }
 
-static CURLcode operate_transfers(struct GlobalConfig *global,
+/*
+ * 'create_transfer' gets the details and sets up a new transfer if 'added'
+ * returns TRUE.
+ */
+static CURLcode create_transfer(struct GlobalConfig *global,
+                                CURLSH *share,
+                                bool *added)
+{
+  CURLcode result = CURLE_OK;
+  *added = FALSE;
+  while(global->current) {
+    result = transfer_per_config(global, global->current, share, added);
+    if(!result && !*added) {
+      /* when one set is drained, continue to next */
+      global->current = global->current->next;
+      continue;
+    }
+    break;
+  }
+  return result;
+}
+
+static CURLcode run_all_transfers(struct GlobalConfig *global,
                                   CURLSH *share,
                                   CURLcode result)
 {
@@ -2206,13 +2281,17 @@ static CURLcode operate_transfers(struct GlobalConfig *global,
   /* cleanup if there are any left */
   for(per = transfers; per;) {
     bool retry;
-    (void)post_transfer(global, share, per, result, &retry);
+    CURLcode result2 = post_per_transfer(global, per, result, &retry);
+    if(!result)
+      /* don't overwrite the original error */
+      result = result2;
+
     /* Free list of given URLs */
     clean_getout(per->config);
 
     /* Release metalink related resources here */
     clean_metalink(per->config);
-    per = del_transfer(per);
+    per = del_per_transfer(per);
   }
 
   /* Reset the global config variables */
@@ -2223,7 +2302,7 @@ static CURLcode operate_transfers(struct GlobalConfig *global,
   return result;
 }
 
-CURLcode operate(struct GlobalConfig *config, int argc, argv_item_t argv[])
+CURLcode operate(struct GlobalConfig *global, int argc, argv_item_t argv[])
 {
   CURLcode result = CURLE_OK;
 
@@ -2236,18 +2315,18 @@ CURLcode operate(struct GlobalConfig *config, int argc, argv_item_t argv[])
   if((argc == 1) ||
      (!curl_strequal(argv[1], "-q") &&
       !curl_strequal(argv[1], "--disable"))) {
-    parseconfig(NULL, config); /* ignore possible failure */
+    parseconfig(NULL, global); /* ignore possible failure */
 
     /* If we had no arguments then make sure a url was specified in .curlrc */
-    if((argc < 2) && (!config->first->url_list)) {
-      helpf(config->errors, NULL);
+    if((argc < 2) && (!global->first->url_list)) {
+      helpf(global->errors, NULL);
       result = CURLE_FAILED_INIT;
     }
   }
 
   if(!result) {
     /* Parse the command line arguments */
-    ParameterError res = parse_args(config, argc, argv);
+    ParameterError res = parse_args(global, argc, argv);
     if(res) {
       result = CURLE_OK;
 
@@ -2270,7 +2349,7 @@ CURLcode operate(struct GlobalConfig *config, int argc, argv_item_t argv[])
     }
     else {
 #ifndef CURL_DISABLE_LIBCURL_OPTION
-      if(config->libcurl) {
+      if(global->libcurl) {
         /* Initialise the libcurl source output */
         result = easysrc_init();
       }
@@ -2279,11 +2358,11 @@ CURLcode operate(struct GlobalConfig *config, int argc, argv_item_t argv[])
       /* Perform the main operations */
       if(!result) {
         size_t count = 0;
-        struct OperationConfig *operation = config->first;
+        struct OperationConfig *operation = global->first;
         CURLSH *share = curl_share_init();
         if(!share) {
 #ifndef CURL_DISABLE_LIBCURL_OPTION
-          if(config->libcurl) {
+          if(global->libcurl) {
             /* Cleanup the libcurl source output */
             easysrc_cleanup();
           }
@@ -2305,30 +2384,24 @@ CURLcode operate(struct GlobalConfig *config, int argc, argv_item_t argv[])
         } while(!result && operation);
 
         /* Set the current operation pointer */
-        config->current = config->first;
-
-        /* Setup all transfers */
-        while(!result && config->current) {
-          result = operate_do(config, config->current, share);
-          config->current = config->current->next;
-        }
+        global->current = global->first;
 
         /* now run! */
-        result = operate_transfers(config, share, result);
+        result = run_all_transfers(global, share, result);
 
         curl_share_cleanup(share);
 #ifndef CURL_DISABLE_LIBCURL_OPTION
-        if(config->libcurl) {
+        if(global->libcurl) {
           /* Cleanup the libcurl source output */
           easysrc_cleanup();
 
           /* Dump the libcurl code if previously enabled */
-          dumpeasysrc(config);
+          dumpeasysrc(global);
         }
 #endif
       }
       else
-        helpf(config->errors, "out of memory\n");
+        helpf(global->errors, "out of memory\n");
     }
   }
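One detail worth pulling out of the long tool_operate.c rework above: the retry classification in post_per_transfer() now also treats HTTP 429 (Too Many Requests, RFC 6585) as transient, alongside the 5xx codes it already retried. A compact illustrative sketch of that classification:

    #include <stdbool.h>

    /* Sketch: response codes the tool is willing to retry after a delay
       (other 5xx codes in the real switch are handled the same way). */
    static bool retryable_http_code(long response)
    {
      switch(response) {
      case 429: /* Too Many Requests (RFC 6585) */
      case 500: /* Internal Server Error */
      case 502: /* Bad Gateway */
      case 503: /* Service Unavailable */
        return true;
      default:
        return false;
      }
    }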
 
diff --git a/src/tool_operhlp.c b/src/tool_operhlp.c
index f3fcc699f..543bf4302 100644
--- a/src/tool_operhlp.c
+++ b/src/tool_operhlp.c
@@ -37,18 +37,20 @@
 
 void clean_getout(struct OperationConfig *config)
 {
-  struct getout *next;
-  struct getout *node = config->url_list;
-
-  while(node) {
-    next = node->next;
-    Curl_safefree(node->url);
-    Curl_safefree(node->outfile);
-    Curl_safefree(node->infile);
-    Curl_safefree(node);
-    node = next;
+  if(config) {
+    struct getout *next;
+    struct getout *node = config->url_list;
+
+    while(node) {
+      next = node->next;
+      Curl_safefree(node->url);
+      Curl_safefree(node->outfile);
+      Curl_safefree(node->infile);
+      Curl_safefree(node);
+      node = next;
+    }
+    config->url_list = NULL;
   }
-  config->url_list = NULL;
 }
 
 bool output_expected(const char *url, const char *uploadfile)
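The clean_getout() hunk above only adds a NULL guard so the cleanup can be called unconditionally before walking and freeing the url_list. A generic sketch of that pattern (hypothetical names, not curl code):

    #include <stdlib.h>

    struct item {
      struct item *next;
      char *url;
    };

    struct owner {
      struct item *list;
    };

    /* tolerate a NULL owner so error/cleanup paths can call this blindly,
       then free every node and its payload and reset the list head */
    static void owner_clean(struct owner *o)
    {
      if(o) {
        struct item *node = o->list;
        while(node) {
          struct item *next = node->next;
          free(node->url);
          free(node);
          node = next;
        }
        o->list = NULL;
      }
    }
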
diff --git a/src/tool_paramhlp.c b/src/tool_paramhlp.c
index c9dac4f0f..af47516b6 100644
--- a/src/tool_paramhlp.c
+++ b/src/tool_paramhlp.c
@@ -58,12 +58,17 @@ struct getout *new_getout(struct OperationConfig *config)
 
 ParameterError file2string(char **bufp, FILE *file)
 {
-  char *ptr;
   char *string = NULL;
-
   if(file) {
+    char *ptr;
+    size_t alloc = 512;
+    size_t alloc_needed;
     char buffer[256];
     size_t stringlen = 0;
+    string = malloc(alloc);
+    if(!string)
+      return PARAM_NO_MEM;
+
     while(fgets(buffer, sizeof(buffer), file)) {
       size_t buflen;
       ptr = strchr(buffer, '\r');
@@ -73,12 +78,24 @@ ParameterError file2string(char **bufp, FILE *file)
       if(ptr)
         *ptr = '\0';
       buflen = strlen(buffer);
-      ptr = realloc(string, stringlen + buflen + 1);
-      if(!ptr) {
-        Curl_safefree(string);
-        return PARAM_NO_MEM;
+      alloc_needed = stringlen + buflen + 1;
+      if(alloc < alloc_needed) {
+#if SIZEOF_SIZE_T < 8
+        if(alloc >= (size_t)SIZE_T_MAX/2) {
+          Curl_safefree(string);
+          return PARAM_NO_MEM;
+        }
+#endif
+        /* doubling is enough since the string to add is always max 256 bytes
+           and the alloc size start at 512 */
+        alloc *= 2;
+        ptr = realloc(string, alloc);
+        if(!ptr) {
+          Curl_safefree(string);
+          return PARAM_NO_MEM;
+        }
+        string = ptr;
       }
-      string = ptr;
       strcpy(string + stringlen, buffer);
       stringlen += buflen;
     }
diff --git a/src/tool_setopt.h b/src/tool_setopt.h
index 690b2c6f3..63401337f 100644
--- a/src/tool_setopt.h
+++ b/src/tool_setopt.h
@@ -33,7 +33,7 @@
     if(!tool_setopt_skip(opt)) {                \
       result = (v);                             \
       if(result)                                \
-        goto show_error;                        \
+        break;                                  \
     }                                           \
   } WHILE_FALSE
 
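In the tool_setopt.h hunk just above, the macro body no longer jumps to a show_error label in the calling function; because the body is wrapped in do { ... } WHILE_FALSE (that is, do { ... } while(0)), a break terminates only the macro's own statement and the recorded result is left for the surrounding code, presumably the reworked tool_operate.c, to inspect. A small self-contained illustration of the idiom, with a hypothetical macro rather than the real SETOPT one:

    #include <stdio.h>

    /* hypothetical: latch the first error and stop executing the rest of
       the macro body, without needing a goto label in the caller */
    #define TRY(expr)                           \
      do {                                      \
        if(!result)                             \
          result = (expr);                      \
        if(result)                              \
          break; /* exits only this do/while */ \
      } while(0)

    static int step(int rc) { return rc; }

    int main(void)
    {
      int result = 0;
      TRY(step(0));  /* ok, result stays 0 */
      TRY(step(7));  /* fails, result becomes 7 */
      TRY(step(0));  /* step() not even called: result is already set */
      printf("result=%d\n", result);  /* prints result=7 */
      return result ? 1 : 0;
    }
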
diff --git a/src/tool_urlglob.c b/src/tool_urlglob.c
index d6f7104ac..450cdcf32 100644
--- a/src/tool_urlglob.c
+++ b/src/tool_urlglob.c
@@ -488,6 +488,9 @@ void glob_cleanup(URLGlob* glob)
   size_t i;
   int elem;
 
+  if(!glob)
+    return;
+
   for(i = 0; i < glob->size; i++) {
     if((glob->pattern[i].type == UPTSet) &&
        (glob->pattern[i].content.Set.elements)) {
diff --git a/tests/certs/Server-localhost-lastSAN-sv.crl b/tests/certs/Server-localhost-lastSAN-sv.crl
index 0b4314124..f87677487 100644
--- a/tests/certs/Server-localhost-lastSAN-sv.crl
+++ b/tests/certs/Server-localhost-lastSAN-sv.crl
@@ -1,12 +1,12 @@
 -----BEGIN X509 CRL-----
 MIIB3DCBxQIBATANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJOTjExMC8GA1UE
 CgwoRWRlbCBDdXJsIEFyY3RpYyBJbGx1ZGl1bSBSZXNlYXJjaCBDbG91ZDEmMCQG
-A1UEAwwdTm9ydGhlcm4gTm93aGVyZSBUcnVzdCBBbmNob3IXDTE4MDkwNTIzMjkw
-MVoXDTE4MTAwNTIzMjkwMVowGTAXAgYN+LitKqAXDTE4MDkwNTIzMjkwMVqgDjAM
-MAoGA1UdFAQDAgEBMA0GCSqGSIb3DQEBBQUAA4IBAQBc8MVCmUUhPb/yJ05wh1EA
-rBbLCjTYTDL9DW5YJIoBUKYWi5DGETS5BmgPU3ci6Pfa6eJ51oRurOCJHnL691Gp
-Y1d6R5CiM8mtHOPGCAgvvo0x+xJ/GzikxaggTDPA2CZWAFjBApMNdMvGTwurcnW9
-0jOl7zsfFoxSDlRqdFw7QW7Axju8vxRpMj6/pVBKmqgM+NUavcVPmRAYlsxCaeNH
-cdBviuw4qt3T6eLcb/RNIuCuXcp8a7ysqkGdSS/Pp/drOGZAmugbj1kmjS8b0n1M
-9L8wxG0k/TsgKSlWy+wbCJcUiYHgwzTd9i/XEdwxGvOnKFeiCvqShhkEG7QjfHs2
+A1UEAwwdTm9ydGhlcm4gTm93aGVyZSBUcnVzdCBBbmNob3IXDTE5MTEwMjEyNTMy
+N1oXDTE5MTIwMjEyNTMyN1owGTAXAgYOTbnGJLAXDTE5MTEwMjEyNTMyNlqgDjAM
+MAoGA1UdFAQDAgEBMA0GCSqGSIb3DQEBBQUAA4IBAQClxELmQvUD2S0UcNFbjMe/
+vv80HtpnwhTK356DUggVBh+EjvIXT4EakBbxxgDZMkaxJYH70RQ0UPLtB41pfmg3
+BS6Gl/0Vn+cAk8w/+dG4DHibdeqSPjIHCaAlkKqHV89Lp7IS6qrD0Bn/L7De6O7c
+4xLvRiDvx/cO5uAkX8vOtzKsOU/0U06QSSGK09dRL2mHbaH4FQj2PFMgcDd1GxAQ
+saii0bWZ6qLiYkQRtJGAplD+uqOaSSsioqVFy/NjaIip0axNtCG9sBhvp6lTpeiR
+Phl04I+WyKoP5f/NTU+fKbWarWka4evPSpRM2o9QYrYb/vj0TMK8lJ3JqgwlLrJ+
 -----END X509 CRL-----
diff --git a/tests/certs/Server-localhost-lastSAN-sv.crt b/tests/certs/Server-localhost-lastSAN-sv.crt
index b3116b695..578fff753 100644
--- a/tests/certs/Server-localhost-lastSAN-sv.crt
+++ b/tests/certs/Server-localhost-lastSAN-sv.crt
@@ -1,41 +1,42 @@
 Certificate:
     Data:
         Version: 3 (0x2)
-        Serial Number: 15361901406880 (0xdf8b8ad2aa0)
-    Signature Algorithm: sha1WithRSAEncryption
+        Serial Number:
+            0e:4d:b9:c6:24:b0
+        Signature Algorithm: sha256WithRSAEncryption
         Issuer:
             countryName               = NN
             organizationName          = Edel Curl Arctic Illudium Research Cloud
             commonName                = Northern Nowhere Trust Anchor
         Validity
-            Not Before: Sep  5 23:29:01 2018 GMT
-            Not After : Nov 22 23:29:01 2026 GMT
+            Not Before: Nov  2 12:53:25 2019 GMT
+            Not After : Jan 19 12:53:25 2028 GMT
         Subject:
             countryName               = NN
             organizationName          = Edel Curl Arctic Illudium Research Cloud
             commonName                = localhost.nn
         Subject Public Key Info:
             Public Key Algorithm: rsaEncryption
-                Public-Key: (2048 bit)
+                RSA Public-Key: (2048 bit)
                 Modulus:
-                    00:df:16:15:5f:2a:a4:50:cf:3a:a8:79:6e:22:8d:
-                    95:16:b7:4d:7d:d2:1f:4f:6d:2d:7a:7d:dc:8a:4f:
-                    53:7b:5f:c9:de:5c:88:6c:a2:74:26:35:1c:78:68:
-                    c1:60:25:a7:7b:b6:1a:9a:aa:33:d0:9f:5e:f2:2e:
-                    21:04:8c:0d:9a:28:f5:61:40:3c:34:1a:9b:8a:70:
-                    81:6d:83:9e:7c:d0:4c:d9:79:dc:37:d9:24:6e:73:
-                    c7:61:31:71:e9:f5:97:b7:65:ad:3d:f6:af:20:6f:
-                    56:b9:b5:42:b5:3d:96:61:31:eb:0d:4c:e9:f5:31:
-                    d3:25:af:40:b3:bb:81:04:7f:1a:ce:21:18:83:52:
-                    2d:51:31:ae:82:f9:cb:10:d3:d5:06:af:f8:71:e8:
-                    a3:c6:9f:7b:48:da:e2:28:af:1c:ff:41:6d:32:81:
-                    45:59:d7:64:e4:b1:d7:c9:86:6a:0b:65:71:66:d6:
-                    42:a8:67:fd:83:49:20:75:16:1e:bb:1b:85:5c:7e:
-                    e2:8f:5f:1c:81:d3:8a:95:d6:92:5c:9e:7f:a2:10:
-                    08:e1:df:ae:69:68:3f:8d:dd:79:4f:da:3f:79:b5:
-                    02:97:57:30:67:4d:3d:76:35:b5:4f:d1:5d:35:dd:
-                    d4:b5:6b:57:b2:e0:23:35:ad:1a:bf:6f:77:e6:bc:
-                    58:ed
+                    00:bd:97:0e:a7:6d:b6:73:8c:d0:21:6b:f3:36:74:
+                    5d:0a:aa:3a:f0:fa:6e:b1:5c:1c:13:74:ca:67:2b:
+                    22:03:d1:a6:3c:25:ef:87:4f:e8:38:9f:21:1d:2e:
+                    88:12:36:66:82:03:02:4c:f8:17:35:02:95:31:b1:
+                    53:40:21:24:2f:00:f0:bf:80:58:16:b1:92:b3:d3:
+                    78:bf:78:cb:0a:91:0c:d2:6d:5d:b2:1f:41:73:16:
+                    02:7c:1a:cd:16:25:c9:e1:1b:81:bd:84:93:4c:63:
+                    ce:38:f4:3e:ad:98:6b:00:89:a8:ba:f5:7e:08:83:
+                    f3:9a:f5:98:b8:9f:d6:d8:c7:d4:f3:07:1c:8f:ef:
+                    bc:29:10:60:8c:85:8b:4c:7a:73:c7:9f:a8:23:2f:
+                    c4:47:f5:18:85:98:fb:27:de:58:93:4b:08:a5:66:
+                    c9:df:db:f0:22:f8:64:9f:a1:56:89:97:ab:02:2c:
+                    5a:99:f2:6f:bf:72:31:90:22:32:ae:86:25:6b:13:
+                    c6:72:ec:df:2e:c8:12:00:c1:e3:38:b4:a0:40:ba:
+                    01:61:c2:d7:b1:ef:7d:4b:29:18:e2:fe:28:d0:98:
+                    e4:65:3f:4c:34:39:e4:82:a9:ca:b2:3d:c4:91:8f:
+                    a0:94:bf:e3:f8:b3:73:48:b7:fe:fa:04:43:e7:b5:
+                    bc:bd
                 Exponent: 65537 (0x10001)
         X509v3 extensions:
             X509v3 Subject Alternative Name: 
@@ -45,48 +46,48 @@ Certificate:
             X509v3 Extended Key Usage: 
                 TLS Web Server Authentication
             X509v3 Subject Key Identifier: 
-                7C:9A:EA:9B:92:98:FB:77:25:89:8B:EF:D3:F4:88:34:AF:EA:24:CC
+                4E:54:63:95:A1:58:0C:FA:BD:3E:58:26:AF:AF:A4:F3:66:1A:CB:25
             X509v3 Authority Key Identifier: 
                 keyid:12:CA:BA:4B:46:04:A7:75:8A:2C:E8:0E:54:94:BC:12:65:A6:7B:CE
 
             X509v3 Basic Constraints: 
                 CA:FALSE
-    Signature Algorithm: sha1WithRSAEncryption
-         0f:97:60:47:2f:22:9f:d4:16:99:5a:ed:f4:b5:54:31:bf:9f:
-         a1:bd:2d:8b:eb:c1:24:db:73:30:c7:46:d6:4c:c8:c6:38:0c:
-         9a:e6:d6:5e:e8:a7:fb:9f:b6:44:66:73:43:86:46:10:c0:4c:
-         40:4e:c1:d7:e4:41:0b:f0:61:f0:6f:45:8c:5a:14:40:42:97:
-         c3:03:d0:ff:6d:4a:06:80:65:49:d4:2f:07:9d:86:59:6b:5b:
-         9e:bc:0c:46:8a:62:da:c0:22:af:13:6c:0d:9d:54:5e:46:53:
-         a5:aa:f2:80:44:c7:07:6e:f7:b0:4c:37:5c:31:08:a0:37:df:
-         8a:35:92:3c:8c:91:2f:64:4f:d3:a0:eb:95:b3:4a:9e:f7:ac:
-         25:ad:06:13:5c:dd:bd:d5:6b:74:8d:c7:c5:a6:b4:89:27:fd:
-         b7:c2:24:a7:6a:b3:64:e6:e6:31:91:35:fc:0e:15:14:38:d6:
-         39:b0:c4:b2:c1:c8:c7:ed:25:d7:b0:a9:b9:a0:70:33:42:90:
-         86:33:2a:d8:d5:8a:02:e6:ab:8d:92:d6:ae:b4:1d:e9:6c:22:
-         a5:2f:1a:48:48:2b:5c:b8:30:01:4b:27:1a:d3:cf:21:77:ab:
-         9f:bc:55:34:2e:9f:03:2b:17:0b:c3:44:8e:a8:94:ae:92:a2:
-         9a:33:c0:8e
+    Signature Algorithm: sha256WithRSAEncryption
+         2c:f9:48:33:7c:93:ca:3c:9c:58:92:8c:2b:87:61:9f:0d:9c:
+         9d:e8:43:43:12:d6:a3:40:71:ec:cb:31:76:80:68:b1:54:d1:
+         86:f4:b3:9e:c8:50:62:b4:87:12:be:9b:d6:3c:2b:cf:22:0e:
+         66:26:c2:31:dd:1f:c6:97:1e:61:a4:51:ea:68:75:81:66:b9:
+         3b:a6:1f:f6:80:ec:6b:aa:65:66:0c:02:ab:c9:57:bd:6a:4e:
+         6d:24:30:13:7b:65:17:60:9a:14:37:57:f7:22:66:55:7d:1a:
+         1a:5b:27:43:3b:d4:88:bc:2f:d3:d7:bb:d5:3f:9b:25:26:5d:
+         39:a0:4c:8a:84:2c:db:04:87:8a:df:49:7d:4b:d2:85:7a:09:
+         5e:df:6b:1b:b5:6e:9c:bb:2b:f6:c5:01:19:5a:87:d0:cf:16:
+         67:8b:54:41:87:c1:33:c3:21:f6:e5:84:d2:84:5d:da:82:cd:
+         39:4d:50:97:f3:83:37:9e:e5:04:0e:dc:c6:20:d1:b3:f6:c7:
+         3d:dd:95:be:8c:b9:72:72:7a:71:66:aa:4a:8e:cf:37:38:e8:
+         c8:06:69:68:8d:d8:d6:8b:4c:23:50:27:fa:e9:bb:2a:a6:89:
+         56:ad:be:4d:bd:be:0c:d7:55:b4:f4:b9:f7:6a:b5:2c:7f:5f:
+         9f:df:f6:61
 -----BEGIN CERTIFICATE-----
-MIID3jCCAsagAwIBAgIGDfi4rSqgMA0GCSqGSIb3DQEBBQUAMGgxCzAJBgNVBAYT
+MIID3jCCAsagAwIBAgIGDk25xiSwMA0GCSqGSIb3DQEBCwUAMGgxCzAJBgNVBAYT
 Ak5OMTEwLwYDVQQKDChFZGVsIEN1cmwgQXJjdGljIElsbHVkaXVtIFJlc2VhcmNo
 IENsb3VkMSYwJAYDVQQDDB1Ob3J0aGVybiBOb3doZXJlIFRydXN0IEFuY2hvcjAe
-Fw0xODA5MDUyMzI5MDFaFw0yNjExMjIyMzI5MDFaMFcxCzAJBgNVBAYTAk5OMTEw
+Fw0xOTExMDIxMjUzMjVaFw0yODAxMTkxMjUzMjVaMFcxCzAJBgNVBAYTAk5OMTEw
 LwYDVQQKDChFZGVsIEN1cmwgQXJjdGljIElsbHVkaXVtIFJlc2VhcmNoIENsb3Vk
 MRUwEwYDVQQDDAxsb2NhbGhvc3Qubm4wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
-ggEKAoIBAQDfFhVfKqRQzzqoeW4ijZUWt0190h9PbS16fdyKT1N7X8neXIhsonQm
-NRx4aMFgJad7thqaqjPQn17yLiEEjA2aKPVhQDw0GpuKcIFtg5580EzZedw32SRu
-c8dhMXHp9Ze3Za099q8gb1a5tUK1PZZhMesNTOn1MdMlr0Czu4EEfxrOIRiDUi1R
-Ma6C+csQ09UGr/hx6KPGn3tI2uIorxz/QW0ygUVZ12TksdfJhmoLZXFm1kKoZ/2D
-SSB1Fh67G4VcfuKPXxyB04qV1pJcnn+iEAjh365paD+N3XlP2j95tQKXVzBnTT12
-NbVP0V013dS1a1ey4CM1rRq/b3fmvFjtAgMBAAGjgZ4wgZswLAYDVR0RBCUwI4IK
+ggEKAoIBAQC9lw6nbbZzjNAha/M2dF0Kqjrw+m6xXBwTdMpnKyID0aY8Je+HT+g4
+nyEdLogSNmaCAwJM+Bc1ApUxsVNAISQvAPC/gFgWsZKz03i/eMsKkQzSbV2yH0Fz
+FgJ8Gs0WJcnhG4G9hJNMY8449D6tmGsAiai69X4Ig/Oa9Zi4n9bYx9TzBxyP77wp
+EGCMhYtMenPHn6gjL8RH9RiFmPsn3liTSwilZsnf2/Ai+GSfoVaJl6sCLFqZ8m+/
+cjGQIjKuhiVrE8Zy7N8uyBIAweM4tKBAugFhwtex731LKRji/ijQmORlP0w0OeSC
+qcqyPcSRj6CUv+P4s3NIt/76BEPntby9AgMBAAGjgZ4wgZswLAYDVR0RBCUwI4IK
 bG9jYWxob3N0MYIKbG9jYWxob3N0MoIJbG9jYWxob3N0MAsGA1UdDwQEAwIDqDAT
-BgNVHSUEDDAKBggrBgEFBQcDATAdBgNVHQ4EFgQUfJrqm5KY+3cliYvv0/SINK/q
-JMwwHwYDVR0jBBgwFoAUEsq6S0YEp3WKLOgOVJS8EmWme84wCQYDVR0TBAIwADAN
-BgkqhkiG9w0BAQUFAAOCAQEAD5dgRy8in9QWmVrt9LVUMb+fob0ti+vBJNtzMMdG
-1kzIxjgMmubWXuin+5+2RGZzQ4ZGEMBMQE7B1+RBC/Bh8G9FjFoUQEKXwwPQ/21K
-BoBlSdQvB52GWWtbnrwMRopi2sAirxNsDZ1UXkZTparygETHB273sEw3XDEIoDff
-ijWSPIyRL2RP06DrlbNKnvesJa0GE1zdvdVrdI3Hxaa0iSf9t8Ikp2qzZObmMZE1
-/A4VFDjWObDEssHIx+0l17CpuaBwM0KQhjMq2NWKAuarjZLWrrQd6WwipS8aSEgr
-XLgwAUsnGtPPIXern7xVNC6fAysXC8NEjqiUrpKimjPAjg==
+BgNVHSUEDDAKBggrBgEFBQcDATAdBgNVHQ4EFgQUTlRjlaFYDPq9Plgmr6+k82Ya
+yyUwHwYDVR0jBBgwFoAUEsq6S0YEp3WKLOgOVJS8EmWme84wCQYDVR0TBAIwADAN
+BgkqhkiG9w0BAQsFAAOCAQEALPlIM3yTyjycWJKMK4dhnw2cnehDQxLWo0Bx7Msx
+doBosVTRhvSznshQYrSHEr6b1jwrzyIOZibCMd0fxpceYaRR6mh1gWa5O6Yf9oDs
+a6plZgwCq8lXvWpObSQwE3tlF2CaFDdX9yJmVX0aGlsnQzvUiLwv09e71T+bJSZd
+OaBMioQs2wSHit9JfUvShXoJXt9rG7VunLsr9sUBGVqH0M8WZ4tUQYfBM8Mh9uWE
+0oRd2oLNOU1Ql/ODN57lBA7cxiDRs/bHPd2Vvoy5cnJ6cWaqSo7PNzjoyAZpaI3Y
+1otMI1An+um7KqaJVq2+Tb2+DNdVtPS592q1LH9fn9/2YQ==
 -----END CERTIFICATE-----
diff --git a/tests/certs/Server-localhost-lastSAN-sv.csr b/tests/certs/Server-localhost-lastSAN-sv.csr
index 78077bcd4..a113db635 100644
--- a/tests/certs/Server-localhost-lastSAN-sv.csr
+++ b/tests/certs/Server-localhost-lastSAN-sv.csr
@@ -1,16 +1,16 @@
 -----BEGIN CERTIFICATE REQUEST-----
 MIICnDCCAYQCAQAwVzELMAkGA1UEBhMCTk4xMTAvBgNVBAoMKEVkZWwgQ3VybCBB
 cmN0aWMgSWxsdWRpdW0gUmVzZWFyY2ggQ2xvdWQxFTATBgNVBAMMDGxvY2FsaG9z
-dC5ubjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN8WFV8qpFDPOqh5
-biKNlRa3TX3SH09tLXp93IpPU3tfyd5ciGyidCY1HHhowWAlp3u2GpqqM9CfXvIu
-IQSMDZoo9WFAPDQam4pwgW2DnnzQTNl53DfZJG5zx2Excen1l7dlrT32ryBvVrm1
-QrU9lmEx6w1M6fUx0yWvQLO7gQR/Gs4hGINSLVExroL5yxDT1Qav+HHoo8afe0ja
-4iivHP9BbTKBRVnXZOSx18mGagtlcWbWQqhn/YNJIHUWHrsbhVx+4o9fHIHTipXW
-klyef6IQCOHfrmloP43deU/aP3m1ApdXMGdNPXY1tU/RXTXd1LVrV7LgIzWtGr9v
-d+a8WO0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCNGbWvnceLjA+R8+p1skgq
-0JxCZIUP/E8iOpg0eX2CjtU+9raYMNa7URtWa1kTSfxbuowPn21CSQmQ+1MDZv0Z
-UTAADKwXO6dDvXkYY4LwpRIozsz1zx1ulUaYmg4D2FPBIxg9QNLB0ic9+gUYdUEX
-Uw7vzxY8ExO99Z6rhJcNZPPYmj97MS/ZmBTZ8jxqjuOQ1R9mIhBvdsYdoDQR8SMK
-1b/0qH0F5Ly2iWt+pi+muoz+tYUyiXrIzYGF4+gImYBJEy35Pni/H8mMY62TxbWi
-QfhD9S8hxfT733X+UQQlQPToNDYdrmm/WcABOXrm8ESXfKvzs8aCodfCpDYIyxbu
+dC5ubjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL2XDqdttnOM0CFr
+8zZ0XQqqOvD6brFcHBN0ymcrIgPRpjwl74dP6DifIR0uiBI2ZoIDAkz4FzUClTGx
+U0AhJC8A8L+AWBaxkrPTeL94ywqRDNJtXbIfQXMWAnwazRYlyeEbgb2Ek0xjzjj0
+Pq2YawCJqLr1fgiD85r1mLif1tjH1PMHHI/vvCkQYIyFi0x6c8efqCMvxEf1GIWY
++yfeWJNLCKVmyd/b8CL4ZJ+hVomXqwIsWpnyb79yMZAiMq6GJWsTxnLs3y7IEgDB
+4zi0oEC6AWHC17HvfUspGOL+KNCY5GU/TDQ55IKpyrI9xJGPoJS/4/izc0i3/voE
+Q+e1vL0CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCpqiSx7VjqeQ2g8lpHF0Nb
+/10H1DqaK7Z3y49xFK3xxKWdxKUdq3Nf7JYlhKpWDYokrkw5W+nhGQILYt6ZD8tN
+tBZphyp3rvmTcewEFtbBne5N7OsAaanlBxeCLhnCICGhd+QCqYJKWe+zw8Oc5dCp
+SRmWEL5FTu9AavBc0LDx1gNBupDiXGhF+BptOzgfDbijd0aRgy9cYwAQ9kXo4H+y
+TH1ZYcSfB0gs7sShiY5FvuGr54Vv0czn+HqrdyWKDGLp7ilPYCT4WXBWfTon9j1H
+9NDomhrVme9IGKItYHg+p59WpevklW900X4NZCVspePgNeBOvXYbGqDEN01o1xIG
 -----END CERTIFICATE REQUEST-----
diff --git a/tests/certs/Server-localhost-lastSAN-sv.der b/tests/certs/Server-localhost-lastSAN-sv.der
index 220e7927b..c72bcb95d 100644
Binary files a/tests/certs/Server-localhost-lastSAN-sv.der and b/tests/certs/Server-localhost-lastSAN-sv.der differ
diff --git a/tests/certs/Server-localhost-lastSAN-sv.key b/tests/certs/Server-localhost-lastSAN-sv.key
index 618e83902..dae48284a 100644
--- a/tests/certs/Server-localhost-lastSAN-sv.key
+++ b/tests/certs/Server-localhost-lastSAN-sv.key
@@ -1,27 +1,27 @@
 -----BEGIN RSA PRIVATE KEY-----
-MIIEogIBAAKCAQEA3xYVXyqkUM86qHluIo2VFrdNfdIfT20ten3cik9Te1/J3lyI
-bKJ0JjUceGjBYCWne7Yamqoz0J9e8i4hBIwNmij1YUA8NBqbinCBbYOefNBM2Xnc
-N9kkbnPHYTFx6fWXt2WtPfavIG9WubVCtT2WYTHrDUzp9THTJa9As7uBBH8aziEY
-g1ItUTGugvnLENPVBq/4ceijxp97SNriKK8c/0FtMoFFWddk5LHXyYZqC2VxZtZC
-qGf9g0kgdRYeuxuFXH7ij18cgdOKldaSXJ5/ohAI4d+uaWg/jd15T9o/ebUCl1cw
-Z009djW1T9FdNd3UtWtXsuAjNa0av2935rxY7QIDAQABAoIBAFz/H7mkVQs62AET
-Xc4Zp2To1Oz2gwbhRGwju6QMnYh4zfZcLKLctf6XdV7cjIBAMiloKH8BJMh7J2Fd
-yXXTzHfPSztXQ8GUtfJoJAw7Kf5t9xtRqXO+mWlR6nOh4RLexng1cpq6Exc6UrTn
-0v8qxV2PKaVJwt3r/1FeVWKXb5kne/Ob4LS7c0xnVqc7TGPtxLdS5mU5jrt0ZdZl
-tcHulLX24rmxKcNvge6r2EiYuet3vUi1uuLBQbWUJIFRwetDufG/2e2ihOuvCj5s
-aYNlRAo0JUwWl7geicRUdxkCpV/Qld7aYldKIcsSzgl6GLpgNpHjUFBbJBGSng0S
-vA4CMQECgYEA9tseJG2IuudqDHnpuUxtnlfDJTfYjtBQnYG1ojbd9FUiuihv/B2K
-pJ5uuowpKSnXOwaHtzyQ6XJA7JChRcDmJ4rf6R/1B61+1XVasyi2WffTJHbKzUk+
-hBAUoGtJIvrChMOnAlQzifP8+b7ec/ghKy87dNlQzQlSunyEW6lAW/UCgYEA51mQ
-JOFsasSvioKilsJuFCcFInZCRTEMz7vK9HW2Qnv71b3xeB6aNoJA8zf1Gw9q5clN
-Yu+8pkGNsWeone8izTzzpgZGJmM/vLjSdIgaJytStha2FwlQxUjggOjSy1zIdW+v
-ROw6OaT2J5+Qw2ruWqSaw2fiDgOpBCJgfg95JhkCgYAy5SppyEuQfXXX7KrLkX5o
-Tx/k5Ia5qylzz/Jq53ULkyH9z6iHCnAzUJbzz0INQpsliEsi9FHMT8oi/A7EGulY
-7cEMh5I1awfjarawiYxPMFFQC0301U0WXVpjWLtTgu/n/47HZCTcJHnb5AZpUpdE
-GBDiHowSOgHcgR+o5lRmoQKBgFaPi0BRW+hi6S9RC5aO7vL5WpF3X/pVjO6Y3Co1
-dNlRXHuv0w5XnOmyOK0IDdxvG1cYx6yx+IrYUjTDjTJyjDnwiVVgWZT5Y5qwKIZT
-ej2Xlx3sR3s9EAyQ5Pc2pdBTSemuvQxzuqFg2H0g1eBYPRCLMCDW2JzXv8B9QE9K
-aNDZAoGAKbVakgVlwrGffJb5c6ZFF9W/WoJYXJRA2/tMqvOcaZwSNq0ySHI/uUyM
-3aexymibv5cGsFhtcr8vqxlX0PZ+PF2SRe/L58PmByEXGmyv6UZ/fhOCh8ttmPzt
-GIh5PiKOd7RR7ydFY22M2+uW99wMf5jSH6uX1DRATFLxJygbnHA=
+MIIEowIBAAKCAQEAvZcOp222c4zQIWvzNnRdCqo68PpusVwcE3TKZysiA9GmPCXv
+h0/oOJ8hHS6IEjZmggMCTPgXNQKVMbFTQCEkLwDwv4BYFrGSs9N4v3jLCpEM0m1d
+sh9BcxYCfBrNFiXJ4RuBvYSTTGPOOPQ+rZhrAImouvV+CIPzmvWYuJ/W2MfU8wcc
+j++8KRBgjIWLTHpzx5+oIy/ER/UYhZj7J95Yk0sIpWbJ39vwIvhkn6FWiZerAixa
+mfJvv3IxkCIyroYlaxPGcuzfLsgSAMHjOLSgQLoBYcLXse99SykY4v4o0JjkZT9M
+NDnkgqnKsj3EkY+glL/j+LNzSLf++gRD57W8vQIDAQABAoIBAQCC0wTKpdtbmtRX
+66y1a9B0NolblgPiISRCjLnKPSpIpldmc+r4XTxqLexkvaIppx5PIpJo2FzzOGgJ
+FUrUGspkIOr/yil+52PK8OcGgOziyrqlTdB0xDqelpZ6WuggG01WJ2v8gco+0TQR
+ewDxOxbDFTq4YARrDdqAmG6dH7baeMDvh6IVe/dkJOVlyh0MA2QP+VR6fDv73jUe
+3yW6G+hql9mjZK6Cgz2lWoeW7YXAvWtTXT68/bcZLO64oLyCjBmsbSrBRQN5m9M9
+dWJV5B0h02P+uMF5H+EAD3qN5I670iSY3d+FWBpd3cA2arRGWlUXNmCGG3CjLYUS
+wGw1lbFhAoGBAPG6JhdXAaH3DN9khp54plbFSIanvjWK8RAEaQgkurwDUL3o1LmC
+ObqiCmMTU25HRlwWkwlCxejHfzOEqFdwiX5QuNmYBE6TYHtmnWSJ5ebMG7SOtlIS
+9Z4dLNZz8j95OGKb3XI9qR0ItxsmuLgWvrJUayd0UXcU7BTzHCXGx99JAoGBAMjI
+0z5+DeTwBhDY1mIUY081FmhrT9PhFHGtRy2OIENW0ZhJ5yE+ygVQssnR+Lr/yl1p
+zGC+CM//5wmJ774Xx0reMsh/rgK4Z0Wq47JJFGo0RMfYVmlod0OndtdobDc7ds7t
+Q3wIGt2ZXW6BtzMo8KVUuuHL8QwZoZqJNe/7QE3VAoGAGGrRRjJHu/CUoEwrPP66
+7rDm7pMrJ4VtbEzFv0jWg/9hvI00T7jT1AJiQjfFibIxbUPqflj8XNMqCi4wQwTf
+Hp9QzMoKRVWlvVFUPL+hNXsQoWB5EjlQDjSsPs1ffwHjrDJKYCvSVVh4BooWxqGl
+iaX1XPrm77xxTHxyL26w6eECgYA176S3g9stpcCrY+RrInju/R7Q3Arsquj4BIk7
+VpOaI0dYdnnNN3XDacMtbec4LKBq6ZHKZyIs5dxldpVdZjvWA8x2ib3v4yNy1o4m
+BXWjdfkICjhkRnjLRsAo61cumx22Row7VF4LKzirB9NzvcqvTwyIvWU6T+RWhAdm
+OQM0JQKBgC+gmBGfnQShTRYlfpb4RVnDijPpC34AdEO7wdeMcdQK9KfWsLZT5y0w
+qoZhW9IPlu1dNRhwHqGHWu2CmQVwFpy5/ccpukCJfyZw7edbb9dIqzKlUWw8Jmmg
+C7WKz4z3mKkZrwptFxDu0dpQ644yOP/gnRaLLyP0zn/brmnYz09X
 -----END RSA PRIVATE KEY-----
diff --git a/tests/certs/Server-localhost-lastSAN-sv.pem b/tests/certs/Server-localhost-lastSAN-sv.pem
index c1684fdbb..42e4a1155 100644
--- a/tests/certs/Server-localhost-lastSAN-sv.pem
+++ b/tests/certs/Server-localhost-lastSAN-sv.pem
@@ -24,70 +24,71 @@ commonName_value              = localhost.nn
 # the certificate
 # some dhparam
 -----BEGIN RSA PRIVATE KEY-----
-MIIEogIBAAKCAQEA3xYVXyqkUM86qHluIo2VFrdNfdIfT20ten3cik9Te1/J3lyI
-bKJ0JjUceGjBYCWne7Yamqoz0J9e8i4hBIwNmij1YUA8NBqbinCBbYOefNBM2Xnc
-N9kkbnPHYTFx6fWXt2WtPfavIG9WubVCtT2WYTHrDUzp9THTJa9As7uBBH8aziEY
-g1ItUTGugvnLENPVBq/4ceijxp97SNriKK8c/0FtMoFFWddk5LHXyYZqC2VxZtZC
-qGf9g0kgdRYeuxuFXH7ij18cgdOKldaSXJ5/ohAI4d+uaWg/jd15T9o/ebUCl1cw
-Z009djW1T9FdNd3UtWtXsuAjNa0av2935rxY7QIDAQABAoIBAFz/H7mkVQs62AET
-Xc4Zp2To1Oz2gwbhRGwju6QMnYh4zfZcLKLctf6XdV7cjIBAMiloKH8BJMh7J2Fd
-yXXTzHfPSztXQ8GUtfJoJAw7Kf5t9xtRqXO+mWlR6nOh4RLexng1cpq6Exc6UrTn
-0v8qxV2PKaVJwt3r/1FeVWKXb5kne/Ob4LS7c0xnVqc7TGPtxLdS5mU5jrt0ZdZl
-tcHulLX24rmxKcNvge6r2EiYuet3vUi1uuLBQbWUJIFRwetDufG/2e2ihOuvCj5s
-aYNlRAo0JUwWl7geicRUdxkCpV/Qld7aYldKIcsSzgl6GLpgNpHjUFBbJBGSng0S
-vA4CMQECgYEA9tseJG2IuudqDHnpuUxtnlfDJTfYjtBQnYG1ojbd9FUiuihv/B2K
-pJ5uuowpKSnXOwaHtzyQ6XJA7JChRcDmJ4rf6R/1B61+1XVasyi2WffTJHbKzUk+
-hBAUoGtJIvrChMOnAlQzifP8+b7ec/ghKy87dNlQzQlSunyEW6lAW/UCgYEA51mQ
-JOFsasSvioKilsJuFCcFInZCRTEMz7vK9HW2Qnv71b3xeB6aNoJA8zf1Gw9q5clN
-Yu+8pkGNsWeone8izTzzpgZGJmM/vLjSdIgaJytStha2FwlQxUjggOjSy1zIdW+v
-ROw6OaT2J5+Qw2ruWqSaw2fiDgOpBCJgfg95JhkCgYAy5SppyEuQfXXX7KrLkX5o
-Tx/k5Ia5qylzz/Jq53ULkyH9z6iHCnAzUJbzz0INQpsliEsi9FHMT8oi/A7EGulY
-7cEMh5I1awfjarawiYxPMFFQC0301U0WXVpjWLtTgu/n/47HZCTcJHnb5AZpUpdE
-GBDiHowSOgHcgR+o5lRmoQKBgFaPi0BRW+hi6S9RC5aO7vL5WpF3X/pVjO6Y3Co1
-dNlRXHuv0w5XnOmyOK0IDdxvG1cYx6yx+IrYUjTDjTJyjDnwiVVgWZT5Y5qwKIZT
-ej2Xlx3sR3s9EAyQ5Pc2pdBTSemuvQxzuqFg2H0g1eBYPRCLMCDW2JzXv8B9QE9K
-aNDZAoGAKbVakgVlwrGffJb5c6ZFF9W/WoJYXJRA2/tMqvOcaZwSNq0ySHI/uUyM
-3aexymibv5cGsFhtcr8vqxlX0PZ+PF2SRe/L58PmByEXGmyv6UZ/fhOCh8ttmPzt
-GIh5PiKOd7RR7ydFY22M2+uW99wMf5jSH6uX1DRATFLxJygbnHA=
+MIIEowIBAAKCAQEAvZcOp222c4zQIWvzNnRdCqo68PpusVwcE3TKZysiA9GmPCXv
+h0/oOJ8hHS6IEjZmggMCTPgXNQKVMbFTQCEkLwDwv4BYFrGSs9N4v3jLCpEM0m1d
+sh9BcxYCfBrNFiXJ4RuBvYSTTGPOOPQ+rZhrAImouvV+CIPzmvWYuJ/W2MfU8wcc
+j++8KRBgjIWLTHpzx5+oIy/ER/UYhZj7J95Yk0sIpWbJ39vwIvhkn6FWiZerAixa
+mfJvv3IxkCIyroYlaxPGcuzfLsgSAMHjOLSgQLoBYcLXse99SykY4v4o0JjkZT9M
+NDnkgqnKsj3EkY+glL/j+LNzSLf++gRD57W8vQIDAQABAoIBAQCC0wTKpdtbmtRX
+66y1a9B0NolblgPiISRCjLnKPSpIpldmc+r4XTxqLexkvaIppx5PIpJo2FzzOGgJ
+FUrUGspkIOr/yil+52PK8OcGgOziyrqlTdB0xDqelpZ6WuggG01WJ2v8gco+0TQR
+ewDxOxbDFTq4YARrDdqAmG6dH7baeMDvh6IVe/dkJOVlyh0MA2QP+VR6fDv73jUe
+3yW6G+hql9mjZK6Cgz2lWoeW7YXAvWtTXT68/bcZLO64oLyCjBmsbSrBRQN5m9M9
+dWJV5B0h02P+uMF5H+EAD3qN5I670iSY3d+FWBpd3cA2arRGWlUXNmCGG3CjLYUS
+wGw1lbFhAoGBAPG6JhdXAaH3DN9khp54plbFSIanvjWK8RAEaQgkurwDUL3o1LmC
+ObqiCmMTU25HRlwWkwlCxejHfzOEqFdwiX5QuNmYBE6TYHtmnWSJ5ebMG7SOtlIS
+9Z4dLNZz8j95OGKb3XI9qR0ItxsmuLgWvrJUayd0UXcU7BTzHCXGx99JAoGBAMjI
+0z5+DeTwBhDY1mIUY081FmhrT9PhFHGtRy2OIENW0ZhJ5yE+ygVQssnR+Lr/yl1p
+zGC+CM//5wmJ774Xx0reMsh/rgK4Z0Wq47JJFGo0RMfYVmlod0OndtdobDc7ds7t
+Q3wIGt2ZXW6BtzMo8KVUuuHL8QwZoZqJNe/7QE3VAoGAGGrRRjJHu/CUoEwrPP66
+7rDm7pMrJ4VtbEzFv0jWg/9hvI00T7jT1AJiQjfFibIxbUPqflj8XNMqCi4wQwTf
+Hp9QzMoKRVWlvVFUPL+hNXsQoWB5EjlQDjSsPs1ffwHjrDJKYCvSVVh4BooWxqGl
+iaX1XPrm77xxTHxyL26w6eECgYA176S3g9stpcCrY+RrInju/R7Q3Arsquj4BIk7
+VpOaI0dYdnnNN3XDacMtbec4LKBq6ZHKZyIs5dxldpVdZjvWA8x2ib3v4yNy1o4m
+BXWjdfkICjhkRnjLRsAo61cumx22Row7VF4LKzirB9NzvcqvTwyIvWU6T+RWhAdm
+OQM0JQKBgC+gmBGfnQShTRYlfpb4RVnDijPpC34AdEO7wdeMcdQK9KfWsLZT5y0w
+qoZhW9IPlu1dNRhwHqGHWu2CmQVwFpy5/ccpukCJfyZw7edbb9dIqzKlUWw8Jmmg
+C7WKz4z3mKkZrwptFxDu0dpQ644yOP/gnRaLLyP0zn/brmnYz09X
 -----END RSA PRIVATE KEY-----
 Certificate:
     Data:
         Version: 3 (0x2)
-        Serial Number: 15361901406880 (0xdf8b8ad2aa0)
-    Signature Algorithm: sha1WithRSAEncryption
+        Serial Number:
+            0e:4d:b9:c6:24:b0
+        Signature Algorithm: sha256WithRSAEncryption
         Issuer:
             countryName               = NN
             organizationName          = Edel Curl Arctic Illudium Research Cloud
             commonName                = Northern Nowhere Trust Anchor
         Validity
-            Not Before: Sep  5 23:29:01 2018 GMT
-            Not After : Nov 22 23:29:01 2026 GMT
+            Not Before: Nov  2 12:53:25 2019 GMT
+            Not After : Jan 19 12:53:25 2028 GMT
         Subject:
             countryName               = NN
             organizationName          = Edel Curl Arctic Illudium Research Cloud
             commonName                = localhost.nn
         Subject Public Key Info:
             Public Key Algorithm: rsaEncryption
-                Public-Key: (2048 bit)
+                RSA Public-Key: (2048 bit)
                 Modulus:
-                    00:df:16:15:5f:2a:a4:50:cf:3a:a8:79:6e:22:8d:
-                    95:16:b7:4d:7d:d2:1f:4f:6d:2d:7a:7d:dc:8a:4f:
-                    53:7b:5f:c9:de:5c:88:6c:a2:74:26:35:1c:78:68:
-                    c1:60:25:a7:7b:b6:1a:9a:aa:33:d0:9f:5e:f2:2e:
-                    21:04:8c:0d:9a:28:f5:61:40:3c:34:1a:9b:8a:70:
-                    81:6d:83:9e:7c:d0:4c:d9:79:dc:37:d9:24:6e:73:
-                    c7:61:31:71:e9:f5:97:b7:65:ad:3d:f6:af:20:6f:
-                    56:b9:b5:42:b5:3d:96:61:31:eb:0d:4c:e9:f5:31:
-                    d3:25:af:40:b3:bb:81:04:7f:1a:ce:21:18:83:52:
-                    2d:51:31:ae:82:f9:cb:10:d3:d5:06:af:f8:71:e8:
-                    a3:c6:9f:7b:48:da:e2:28:af:1c:ff:41:6d:32:81:
-                    45:59:d7:64:e4:b1:d7:c9:86:6a:0b:65:71:66:d6:
-                    42:a8:67:fd:83:49:20:75:16:1e:bb:1b:85:5c:7e:
-                    e2:8f:5f:1c:81:d3:8a:95:d6:92:5c:9e:7f:a2:10:
-                    08:e1:df:ae:69:68:3f:8d:dd:79:4f:da:3f:79:b5:
-                    02:97:57:30:67:4d:3d:76:35:b5:4f:d1:5d:35:dd:
-                    d4:b5:6b:57:b2:e0:23:35:ad:1a:bf:6f:77:e6:bc:
-                    58:ed
+                    00:bd:97:0e:a7:6d:b6:73:8c:d0:21:6b:f3:36:74:
+                    5d:0a:aa:3a:f0:fa:6e:b1:5c:1c:13:74:ca:67:2b:
+                    22:03:d1:a6:3c:25:ef:87:4f:e8:38:9f:21:1d:2e:
+                    88:12:36:66:82:03:02:4c:f8:17:35:02:95:31:b1:
+                    53:40:21:24:2f:00:f0:bf:80:58:16:b1:92:b3:d3:
+                    78:bf:78:cb:0a:91:0c:d2:6d:5d:b2:1f:41:73:16:
+                    02:7c:1a:cd:16:25:c9:e1:1b:81:bd:84:93:4c:63:
+                    ce:38:f4:3e:ad:98:6b:00:89:a8:ba:f5:7e:08:83:
+                    f3:9a:f5:98:b8:9f:d6:d8:c7:d4:f3:07:1c:8f:ef:
+                    bc:29:10:60:8c:85:8b:4c:7a:73:c7:9f:a8:23:2f:
+                    c4:47:f5:18:85:98:fb:27:de:58:93:4b:08:a5:66:
+                    c9:df:db:f0:22:f8:64:9f:a1:56:89:97:ab:02:2c:
+                    5a:99:f2:6f:bf:72:31:90:22:32:ae:86:25:6b:13:
+                    c6:72:ec:df:2e:c8:12:00:c1:e3:38:b4:a0:40:ba:
+                    01:61:c2:d7:b1:ef:7d:4b:29:18:e2:fe:28:d0:98:
+                    e4:65:3f:4c:34:39:e4:82:a9:ca:b2:3d:c4:91:8f:
+                    a0:94:bf:e3:f8:b3:73:48:b7:fe:fa:04:43:e7:b5:
+                    bc:bd
                 Exponent: 65537 (0x10001)
         X509v3 extensions:
             X509v3 Subject Alternative Name: 
@@ -97,48 +98,48 @@ Certificate:
             X509v3 Extended Key Usage: 
                 TLS Web Server Authentication
             X509v3 Subject Key Identifier: 
-                7C:9A:EA:9B:92:98:FB:77:25:89:8B:EF:D3:F4:88:34:AF:EA:24:CC
+                4E:54:63:95:A1:58:0C:FA:BD:3E:58:26:AF:AF:A4:F3:66:1A:CB:25
             X509v3 Authority Key Identifier: 
                 keyid:12:CA:BA:4B:46:04:A7:75:8A:2C:E8:0E:54:94:BC:12:65:A6:7B:CE
 
             X509v3 Basic Constraints: 
                 CA:FALSE
-    Signature Algorithm: sha1WithRSAEncryption
-         0f:97:60:47:2f:22:9f:d4:16:99:5a:ed:f4:b5:54:31:bf:9f:
-         a1:bd:2d:8b:eb:c1:24:db:73:30:c7:46:d6:4c:c8:c6:38:0c:
-         9a:e6:d6:5e:e8:a7:fb:9f:b6:44:66:73:43:86:46:10:c0:4c:
-         40:4e:c1:d7:e4:41:0b:f0:61:f0:6f:45:8c:5a:14:40:42:97:
-         c3:03:d0:ff:6d:4a:06:80:65:49:d4:2f:07:9d:86:59:6b:5b:
-         9e:bc:0c:46:8a:62:da:c0:22:af:13:6c:0d:9d:54:5e:46:53:
-         a5:aa:f2:80:44:c7:07:6e:f7:b0:4c:37:5c:31:08:a0:37:df:
-         8a:35:92:3c:8c:91:2f:64:4f:d3:a0:eb:95:b3:4a:9e:f7:ac:
-         25:ad:06:13:5c:dd:bd:d5:6b:74:8d:c7:c5:a6:b4:89:27:fd:
-         b7:c2:24:a7:6a:b3:64:e6:e6:31:91:35:fc:0e:15:14:38:d6:
-         39:b0:c4:b2:c1:c8:c7:ed:25:d7:b0:a9:b9:a0:70:33:42:90:
-         86:33:2a:d8:d5:8a:02:e6:ab:8d:92:d6:ae:b4:1d:e9:6c:22:
-         a5:2f:1a:48:48:2b:5c:b8:30:01:4b:27:1a:d3:cf:21:77:ab:
-         9f:bc:55:34:2e:9f:03:2b:17:0b:c3:44:8e:a8:94:ae:92:a2:
-         9a:33:c0:8e
+    Signature Algorithm: sha256WithRSAEncryption
+         2c:f9:48:33:7c:93:ca:3c:9c:58:92:8c:2b:87:61:9f:0d:9c:
+         9d:e8:43:43:12:d6:a3:40:71:ec:cb:31:76:80:68:b1:54:d1:
+         86:f4:b3:9e:c8:50:62:b4:87:12:be:9b:d6:3c:2b:cf:22:0e:
+         66:26:c2:31:dd:1f:c6:97:1e:61:a4:51:ea:68:75:81:66:b9:
+         3b:a6:1f:f6:80:ec:6b:aa:65:66:0c:02:ab:c9:57:bd:6a:4e:
+         6d:24:30:13:7b:65:17:60:9a:14:37:57:f7:22:66:55:7d:1a:
+         1a:5b:27:43:3b:d4:88:bc:2f:d3:d7:bb:d5:3f:9b:25:26:5d:
+         39:a0:4c:8a:84:2c:db:04:87:8a:df:49:7d:4b:d2:85:7a:09:
+         5e:df:6b:1b:b5:6e:9c:bb:2b:f6:c5:01:19:5a:87:d0:cf:16:
+         67:8b:54:41:87:c1:33:c3:21:f6:e5:84:d2:84:5d:da:82:cd:
+         39:4d:50:97:f3:83:37:9e:e5:04:0e:dc:c6:20:d1:b3:f6:c7:
+         3d:dd:95:be:8c:b9:72:72:7a:71:66:aa:4a:8e:cf:37:38:e8:
+         c8:06:69:68:8d:d8:d6:8b:4c:23:50:27:fa:e9:bb:2a:a6:89:
+         56:ad:be:4d:bd:be:0c:d7:55:b4:f4:b9:f7:6a:b5:2c:7f:5f:
+         9f:df:f6:61
 -----BEGIN CERTIFICATE-----
-MIID3jCCAsagAwIBAgIGDfi4rSqgMA0GCSqGSIb3DQEBBQUAMGgxCzAJBgNVBAYT
+MIID3jCCAsagAwIBAgIGDk25xiSwMA0GCSqGSIb3DQEBCwUAMGgxCzAJBgNVBAYT
 Ak5OMTEwLwYDVQQKDChFZGVsIEN1cmwgQXJjdGljIElsbHVkaXVtIFJlc2VhcmNo
 IENsb3VkMSYwJAYDVQQDDB1Ob3J0aGVybiBOb3doZXJlIFRydXN0IEFuY2hvcjAe
-Fw0xODA5MDUyMzI5MDFaFw0yNjExMjIyMzI5MDFaMFcxCzAJBgNVBAYTAk5OMTEw
+Fw0xOTExMDIxMjUzMjVaFw0yODAxMTkxMjUzMjVaMFcxCzAJBgNVBAYTAk5OMTEw
 LwYDVQQKDChFZGVsIEN1cmwgQXJjdGljIElsbHVkaXVtIFJlc2VhcmNoIENsb3Vk
 MRUwEwYDVQQDDAxsb2NhbGhvc3Qubm4wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
-ggEKAoIBAQDfFhVfKqRQzzqoeW4ijZUWt0190h9PbS16fdyKT1N7X8neXIhsonQm
-NRx4aMFgJad7thqaqjPQn17yLiEEjA2aKPVhQDw0GpuKcIFtg5580EzZedw32SRu
-c8dhMXHp9Ze3Za099q8gb1a5tUK1PZZhMesNTOn1MdMlr0Czu4EEfxrOIRiDUi1R
-Ma6C+csQ09UGr/hx6KPGn3tI2uIorxz/QW0ygUVZ12TksdfJhmoLZXFm1kKoZ/2D
-SSB1Fh67G4VcfuKPXxyB04qV1pJcnn+iEAjh365paD+N3XlP2j95tQKXVzBnTT12
-NbVP0V013dS1a1ey4CM1rRq/b3fmvFjtAgMBAAGjgZ4wgZswLAYDVR0RBCUwI4IK
+ggEKAoIBAQC9lw6nbbZzjNAha/M2dF0Kqjrw+m6xXBwTdMpnKyID0aY8Je+HT+g4
+nyEdLogSNmaCAwJM+Bc1ApUxsVNAISQvAPC/gFgWsZKz03i/eMsKkQzSbV2yH0Fz
+FgJ8Gs0WJcnhG4G9hJNMY8449D6tmGsAiai69X4Ig/Oa9Zi4n9bYx9TzBxyP77wp
+EGCMhYtMenPHn6gjL8RH9RiFmPsn3liTSwilZsnf2/Ai+GSfoVaJl6sCLFqZ8m+/
+cjGQIjKuhiVrE8Zy7N8uyBIAweM4tKBAugFhwtex731LKRji/ijQmORlP0w0OeSC
+qcqyPcSRj6CUv+P4s3NIt/76BEPntby9AgMBAAGjgZ4wgZswLAYDVR0RBCUwI4IK
 bG9jYWxob3N0MYIKbG9jYWxob3N0MoIJbG9jYWxob3N0MAsGA1UdDwQEAwIDqDAT
-BgNVHSUEDDAKBggrBgEFBQcDATAdBgNVHQ4EFgQUfJrqm5KY+3cliYvv0/SINK/q
-JMwwHwYDVR0jBBgwFoAUEsq6S0YEp3WKLOgOVJS8EmWme84wCQYDVR0TBAIwADAN
-BgkqhkiG9w0BAQUFAAOCAQEAD5dgRy8in9QWmVrt9LVUMb+fob0ti+vBJNtzMMdG
-1kzIxjgMmubWXuin+5+2RGZzQ4ZGEMBMQE7B1+RBC/Bh8G9FjFoUQEKXwwPQ/21K
-BoBlSdQvB52GWWtbnrwMRopi2sAirxNsDZ1UXkZTparygETHB273sEw3XDEIoDff
-ijWSPIyRL2RP06DrlbNKnvesJa0GE1zdvdVrdI3Hxaa0iSf9t8Ikp2qzZObmMZE1
-/A4VFDjWObDEssHIx+0l17CpuaBwM0KQhjMq2NWKAuarjZLWrrQd6WwipS8aSEgr
-XLgwAUsnGtPPIXern7xVNC6fAysXC8NEjqiUrpKimjPAjg==
+BgNVHSUEDDAKBggrBgEFBQcDATAdBgNVHQ4EFgQUTlRjlaFYDPq9Plgmr6+k82Ya
+yyUwHwYDVR0jBBgwFoAUEsq6S0YEp3WKLOgOVJS8EmWme84wCQYDVR0TBAIwADAN
+BgkqhkiG9w0BAQsFAAOCAQEALPlIM3yTyjycWJKMK4dhnw2cnehDQxLWo0Bx7Msx
+doBosVTRhvSznshQYrSHEr6b1jwrzyIOZibCMd0fxpceYaRR6mh1gWa5O6Yf9oDs
+a6plZgwCq8lXvWpObSQwE3tlF2CaFDdX9yJmVX0aGlsnQzvUiLwv09e71T+bJSZd
+OaBMioQs2wSHit9JfUvShXoJXt9rG7VunLsr9sUBGVqH0M8WZ4tUQYfBM8Mh9uWE
+0oRd2oLNOU1Ql/ODN57lBA7cxiDRs/bHPd2Vvoy5cnJ6cWaqSo7PNzjoyAZpaI3Y
+1otMI1An+um7KqaJVq2+Tb2+DNdVtPS592q1LH9fn9/2YQ==
 -----END CERTIFICATE-----
diff --git a/tests/certs/Server-localhost-lastSAN-sv.pub.der b/tests/certs/Server-localhost-lastSAN-sv.pub.der
index 5cd11dc13..480ee31a9 100644
Binary files a/tests/certs/Server-localhost-lastSAN-sv.pub.der and b/tests/certs/Server-localhost-lastSAN-sv.pub.der differ
diff --git a/tests/certs/Server-localhost-lastSAN-sv.pub.pem b/tests/certs/Server-localhost-lastSAN-sv.pub.pem
index aaca85708..5c1d3330f 100644
--- a/tests/certs/Server-localhost-lastSAN-sv.pub.pem
+++ b/tests/certs/Server-localhost-lastSAN-sv.pub.pem
@@ -1,9 +1,9 @@
 -----BEGIN PUBLIC KEY-----
-MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3xYVXyqkUM86qHluIo2V
-FrdNfdIfT20ten3cik9Te1/J3lyIbKJ0JjUceGjBYCWne7Yamqoz0J9e8i4hBIwN
-mij1YUA8NBqbinCBbYOefNBM2XncN9kkbnPHYTFx6fWXt2WtPfavIG9WubVCtT2W
-YTHrDUzp9THTJa9As7uBBH8aziEYg1ItUTGugvnLENPVBq/4ceijxp97SNriKK8c
-/0FtMoFFWddk5LHXyYZqC2VxZtZCqGf9g0kgdRYeuxuFXH7ij18cgdOKldaSXJ5/
-ohAI4d+uaWg/jd15T9o/ebUCl1cwZ009djW1T9FdNd3UtWtXsuAjNa0av2935rxY
-7QIDAQAB
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvZcOp222c4zQIWvzNnRd
+Cqo68PpusVwcE3TKZysiA9GmPCXvh0/oOJ8hHS6IEjZmggMCTPgXNQKVMbFTQCEk
+LwDwv4BYFrGSs9N4v3jLCpEM0m1dsh9BcxYCfBrNFiXJ4RuBvYSTTGPOOPQ+rZhr
+AImouvV+CIPzmvWYuJ/W2MfU8wccj++8KRBgjIWLTHpzx5+oIy/ER/UYhZj7J95Y
+k0sIpWbJ39vwIvhkn6FWiZerAixamfJvv3IxkCIyroYlaxPGcuzfLsgSAMHjOLSg
+QLoBYcLXse99SykY4v4o0JjkZT9MNDnkgqnKsj3EkY+glL/j+LNzSLf++gRD57W8
+vQIDAQAB
 -----END PUBLIC KEY-----
diff --git a/tests/data/Makefile.inc b/tests/data/Makefile.inc
index e2a04e181..846eb4046 100644
--- a/tests/data/Makefile.inc
+++ b/tests/data/Makefile.inc
@@ -57,7 +57,7 @@ test298 test299 test300 test301 test302 test303 test304 test305 test306 \
 test307 test308 test309 test310 test311 test312 test313 test314 test315 \
 test316 test317 test318 test319 test320 test321 test322 test323 test324 \
 test325 test326 test327 test328 test329 test330 test331 test332 test333 \
-test334 test335 \
+test334 test335 test336 test337 test338 \
 test340 \
 \
 test350 test351 test352 test353 test354 test355 test356 \
@@ -83,7 +83,8 @@ test617 test618 test619 test620 test621 test622 test623 test624 test625 \
 test626 test627 test628 test629 test630 test631 test632 test633 test634 \
 test635 test636 test637 test638 test639 test640 test641 test642 \
 test643 test644 test645 test646 test647 test648 test649 test650 test651 \
-test652 test653 test654 test655 test656 test658 test659 test660 \
+test652 test653 test654 test655 test656 test658 test659 test660 test661 \
+test662 test663 \
 \
 test700 test701 test702 test703 test704 test705 test706 test707 test708 \
 test709 test710 test711 test712 test713 test714 test715 test716 test717 \
@@ -129,7 +130,7 @@ test1128 test1129 test1130 test1131 test1132 test1133 test1134 test1135 \
 test1136 test1137 test1138                   test1141 test1142 test1143 \
 test1144 test1145 test1146 test1147 test1148 test1149 test1150 test1151 \
 test1152 test1153 test1154 test1155 test1156 test1157 test1158 test1159 \
-test1160 test1161 test1162 test1163 test1164 test1165 \
+test1160 test1161 test1162 test1163 test1164 test1165 test1166 \
 test1170 test1171 test1172 test1174 \
 \
 test1200 test1201 test1202 test1203 test1204 test1205 test1206 test1207 \
@@ -178,18 +179,18 @@ test1540 test1541 \
 test1550 test1551 test1552 test1553 test1554 test1555 test1556 test1557 \
 test1558 test1559 test1560 test1561 test1562 test1563 \
 \
-test1590 test1591 test1592 test1593 test1594 \
+test1590 test1591 test1592 test1593 test1594 test1595 test1596 \
 \
 test1600 test1601 test1602 test1603 test1604 test1605 test1606 test1607 \
 test1608 test1609 test1620 test1621 \
 \
-test1650 test1651 test1652 test1653 test1654 \
+test1650 test1651 test1652 test1653 test1654 test1655 \
 \
 test1700 test1701 test1702 \
 \
 test1800 test1801 \
 \
-test1900 test1901 test1902 test1903 test1904 test1905 test1906 \
+test1900 test1901 test1902 test1903 test1904 test1905 test1906 test1907 \
 \
 test2000 test2001 test2002 test2003 test2004 test2005 test2006 test2007 \
 test2008 test2009 test2010 test2011 test2012 test2013 test2014 test2015 \
diff --git a/tests/data/test1002 b/tests/data/test1002
index c20995d90..5b6ef9431 100644
--- a/tests/data/test1002
+++ b/tests/data/test1002
@@ -65,6 +65,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP PUT with Digest auth, resumed upload and modified method, twice
diff --git a/tests/data/test1008 b/tests/data/test1008
index c123c5c0c..9fca722c8 100644
--- a/tests/data/test1008
+++ b/tests/data/test1008
@@ -88,6 +88,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy CONNECT auth NTLM with chunked-encoded 407 response
diff --git a/tests/data/test1010 b/tests/data/test1010
index b2083af7b..ef073f5e5 100644
--- a/tests/data/test1010
+++ b/tests/data/test1010
@@ -49,9 +49,9 @@ PASS address@hidden
 PWD
 EPSV
 TYPE A
-LIST /list/this/path/1010/
+LIST /list/this/path/1010
 EPSV
-LIST /list/this/path/1010/
+LIST /list/this/path/1010
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test1016 b/tests/data/test1016
index 4927f9eaa..01bf100f3 100644
--- a/tests/data/test1016
+++ b/tests/data/test1016
@@ -23,7 +23,7 @@ file
 X-Y range on a file:// URL to stdout
  </name>
 <command option="no-include">
--r 1-4 file://localhost/%PWD/log/test1016.txt 
+-r 1-4 file://localhost%FILE_PWD/log/test1016.txt 
 </command>
 <file name="log/test1016.txt">
 1234567890
diff --git a/tests/data/test1017 b/tests/data/test1017
index cfdd80f9e..9790d776d 100644
--- a/tests/data/test1017
+++ b/tests/data/test1017
@@ -24,7 +24,7 @@ file
 0-Y range on a file:// URL to stdout
  </name>
 <command option="no-include">
--r 0-3 file://localhost/%PWD/log/test1017.txt 
+-r 0-3 file://localhost%FILE_PWD/log/test1017.txt 
 </command>
 <file name="log/test1017.txt">
 1234567890
diff --git a/tests/data/test1018 b/tests/data/test1018
index 57487014f..ddf1f2595 100644
--- a/tests/data/test1018
+++ b/tests/data/test1018
@@ -23,7 +23,7 @@ file
 X-X range on a file:// URL to stdout
  </name>
 <command option="no-include">
--r 4-4 file://localhost/%PWD/log/test1018.txt 
+-r 4-4 file://localhost%FILE_PWD/log/test1018.txt 
 </command>
 <file name="log/test1018.txt">
 1234567890
diff --git a/tests/data/test1019 b/tests/data/test1019
index 054e38d5d..2a92ae5cf 100644
--- a/tests/data/test1019
+++ b/tests/data/test1019
@@ -24,7 +24,7 @@ file
 X- range on a file:// URL to stdout
  </name>
 <command option="no-include">
--r 7- file://localhost/%PWD/log/test1019.txt 
+-r 7- file://localhost%FILE_PWD/log/test1019.txt 
 </command>
 <file name="log/test1019.txt">
 1234567890
diff --git a/tests/data/test1020 b/tests/data/test1020
index 8e03a1758..0d88532f7 100644
--- a/tests/data/test1020
+++ b/tests/data/test1020
@@ -24,7 +24,7 @@ file
 -Y range on a file:// URL to stdout
  </name>
 <command option="no-include">
--r -9 file://localhost/%PWD/log/test1020.txt 
+-r -9 file://localhost%FILE_PWD/log/test1020.txt 
 </command>
 <file name="log/test1020.txt">
 1234567890
diff --git a/tests/data/test1021 b/tests/data/test1021
index 800973d1c..689341d60 100644
--- a/tests/data/test1021
+++ b/tests/data/test1021
@@ -93,6 +93,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy CONNECT with any proxyauth and proxy offers NTLM and close
diff --git a/tests/data/test1059 b/tests/data/test1059
index 6820ea679..615e625c9 100644
--- a/tests/data/test1059
+++ b/tests/data/test1059
@@ -26,6 +26,7 @@ Content-Length: 0
 <client>
 <features>
 ftp
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test1060 b/tests/data/test1060
index 14fc7e53c..c4b264c10 100644
--- a/tests/data/test1060
+++ b/tests/data/test1060
@@ -869,6 +869,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP proxy CONNECT auth Digest, large headers and data
diff --git a/tests/data/test1061 b/tests/data/test1061
index c481d39c4..6ddddfee2 100644
--- a/tests/data/test1061
+++ b/tests/data/test1061
@@ -874,6 +874,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP proxy CONNECT auth Digest, large headers and chunked data
diff --git a/tests/data/test1063 b/tests/data/test1063
index 2979094dc..de2085d3e 100644
--- a/tests/data/test1063
+++ b/tests/data/test1063
@@ -28,7 +28,7 @@ Invalid large X- range on a file://
 # This range value is 2**32+7, which will be truncated to the valid value 7
 # if the large file support is not working correctly
  <command>
--r 4294967303- file://localhost/%PWD/log/test1063.txt 
+-r 4294967303- file://localhost%FILE_PWD/log/test1063.txt 
 </command>
 <file name="log/test1063.txt">
 1234567890
diff --git a/tests/data/test1077 b/tests/data/test1077
index a3c90245a..e917e8a56 100644
--- a/tests/data/test1077
+++ b/tests/data/test1077
@@ -44,6 +44,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
  <name>
 FTP over HTTP proxy with downgrade to HTTP 1.0
diff --git a/tests/data/test1078 b/tests/data/test1078
index a9bb771be..d705dbca4 100644
--- a/tests/data/test1078
+++ b/tests/data/test1078
@@ -45,6 +45,9 @@ HTTP 1.0 CONNECT with proxytunnel and downgrade GET to HTTP/1.0
  <command>
 --proxy1.0 %HOSTIP:%PROXYPORT -p http://%HOSTIP.1078:%HTTPPORT/we/want/that/page/1078 http://%HOSTIP.1078:%HTTPPORT/we/want/that/page/1078
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1087 b/tests/data/test1087
index d228976ac..883d98642 100644
--- a/tests/data/test1087
+++ b/tests/data/test1087
@@ -80,6 +80,9 @@ HTTP, proxy with --anyauth and Location: to new host
  <command>
 http://first.host.it.is/we/want/that/page/10871000 -x %HOSTIP:%HTTPPORT --user iam:myself --location --anyauth
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1088 b/tests/data/test1088
index a807ce9e5..f2b6fc263 100644
--- a/tests/data/test1088
+++ b/tests/data/test1088
@@ -81,6 +81,9 @@ HTTP, proxy with --anyauth and Location: to new host using location-trusted
  <command>
 http://first.host.it.is/we/want/that/page/10881000 -x %HOSTIP:%HTTPPORT --user iam:myself --location-trusted --anyauth
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1091 b/tests/data/test1091
index f3ce8608a..24669334b 100644
--- a/tests/data/test1091
+++ b/tests/data/test1091
@@ -34,7 +34,8 @@ FTP URL with type=i
 USER anonymous
 PASS address@hidden
 PWD
-CWD /tmp
+CWD /
+CWD tmp
 CWD moo
 EPSV
 TYPE I
diff --git a/tests/data/test1092 b/tests/data/test1092
index adef4320b..725a274ba 100644
--- a/tests/data/test1092
+++ b/tests/data/test1092
@@ -30,6 +30,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
  <name>
 FTP with type=i over HTTP proxy
diff --git a/tests/data/test1098 b/tests/data/test1098
index 980564810..0d397340c 100644
--- a/tests/data/test1098
+++ b/tests/data/test1098
@@ -29,6 +29,7 @@ http
 <features>
 http
 ftp
+proxy
 </features>
  <name>
 FTP RETR twice over proxy confirming persistent connection
diff --git a/tests/data/test1104 b/tests/data/test1104
index 570f13c51..e66da58ad 100644
--- a/tests/data/test1104
+++ b/tests/data/test1104
@@ -61,6 +61,9 @@ HTTP cookie expiry date at Jan 1 00:00:00 GMT 1970
  <command>
 http://%HOSTIP:%HTTPPORT/want/1104 -L -x %HOSTIP:%HTTPPORT -c log/cookies1104.jar
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1106 b/tests/data/test1106
index 0c6bec177..37a77e36e 100644
--- a/tests/data/test1106
+++ b/tests/data/test1106
@@ -24,6 +24,7 @@ hello
 <client>
 <features>
 ftp
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test1136 b/tests/data/test1136
index e18a92325..75b6ee838 100644
--- a/tests/data/test1136
+++ b/tests/data/test1136
@@ -33,6 +33,7 @@ boo
 <client>
 <features>
 PSL
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test1141 b/tests/data/test1141
index 9c41d3935..b0cff8e37 100644
--- a/tests/data/test1141
+++ b/tests/data/test1141
@@ -47,6 +47,9 @@ HTTP redirect to http:/// (three slashes!)
  <command>
 %HOSTIP:%HTTPPORT/want/1141 -L -x http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1142 b/tests/data/test1142
index 76c6bdf55..5f1e2b35a 100644
--- a/tests/data/test1142
+++ b/tests/data/test1142
@@ -42,6 +42,9 @@ HTTP redirect to http://// (four slashes!)
  <command>
 %HOSTIP:%HTTPPORT/want/1142 -L -x http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1149 b/tests/data/test1149
index f826391e9..f0c297dc1 100644
--- a/tests/data/test1149
+++ b/tests/data/test1149
@@ -57,7 +57,7 @@ TYPE A
 LIST
 CWD /
 EPSV
-LIST list/this/path/1149/
+LIST list/this/path/1149
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test1150 b/tests/data/test1150
index ecd95d57e..e86c7e154 100644
--- a/tests/data/test1150
+++ b/tests/data/test1150
@@ -32,6 +32,9 @@ HTTP proxy with URLs using different ports
  <command>
 --proxy http://%HOSTIP:%HTTPPORT http://test.remote.example.com.1150:150/path http://test.remote.example.com.1150:1234/path/
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1162 b/tests/data/test1162
index 73e4646e1..b6b394139 100644
--- a/tests/data/test1162
+++ b/tests/data/test1162
@@ -31,6 +31,10 @@ FTP wildcard with crazy pattern
 <command>
 "ftp://%HOSTIP:%FTPPORT/fully_simulated/DOS/[*\\s-'tl"
 </command>
+<setenv>
+# Needed for MSYS2 to not convert backslash to forward slash
+MSYS2_ARG_CONV_EXCL=ftp://
+</setenv>
 </client>
 <verify>
 <protocol>
diff --git a/tests/data/test199 b/tests/data/test1166
similarity index 58%
copy from tests/data/test199
copy to tests/data/test1166
index 72675b535..3cae80ecd 100644
--- a/tests/data/test199
+++ b/tests/data/test1166
@@ -3,54 +3,48 @@
 <keywords>
 HTTP
 HTTP GET
-globbing
+followlocation
+cookies
 </keywords>
 </info>
-#
+
 # Server-side
 <reply>
 <data>
-HTTP/1.1 200 OK
-Date: Thu, 09 Nov 2010 14:49:00 GMT
-Server: test-server/fake
-Last-Modified: Tue, 13 Jun 2000 12:10:00 GMT
-ETag: "21025-dc7-39462498"
-Accept-Ranges: bytes
-Content-Length: 6
-Connection: close
-Content-Type: text/html
-Funny-head: yesyes
-
--foo-
+HTTP/1.1 200 OK
+Date: Thu, 09 Nov 2010 14:49:00 GMT
+Server: test-server/fake
+Set-Cookie: trackyou=want; path=/
+Content-Length: 68
+
+This server reply is for testing a Location: following with cookies
 </data>
 </reply>
 
-#
 # Client-side
 <client>
 <server>
 http
 </server>
  <name>
-HTTP with -d, -G and {}
+HTTP response with cookies but not receiving!
  </name>
  <command>
--d "foo=moo&moo=poo" "http://%HOSTIP:%HTTPPORT/{199,199}" -G
+http://%HOSTIP:%HTTPPORT/want/1166 http://%HOSTIP:%HTTPPORT/want/1166
 </command>
 </client>
 
-#
 # Verify data after the test has been "shot"
 <verify>
 <strip>
 ^User-Agent:.*
 </strip>
 <protocol>
-GET /199?foo=moo&moo=poo HTTP/1.1
+GET /want/1166 HTTP/1.1
 Host: %HOSTIP:%HTTPPORT
 Accept: */*
 
-GET /199?foo=moo&moo=poo HTTP/1.1
+GET /want/1166 HTTP/1.1
 Host: %HOSTIP:%HTTPPORT
 Accept: */*
 
diff --git a/tests/data/test1213 b/tests/data/test1213
index 4f22f0d92..46a6938cb 100644
--- a/tests/data/test1213
+++ b/tests/data/test1213
@@ -35,6 +35,9 @@ HTTP with proxy and host-only URL
  <command>
 -x %HOSTIP:%HTTPPORT we.want.that.site.com.1213
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1214 b/tests/data/test1214
index 3eeb3e3ad..73c799a4a 100644
--- a/tests/data/test1214
+++ b/tests/data/test1214
@@ -35,6 +35,9 @@ HTTP with proxy and URL with ? and no slash separator
  <command>
 -x %HOSTIP:%HTTPPORT http://we.want.that.site.com.1214?moo=foo
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1215 b/tests/data/test1215
index 08d74369e..8edfd9b4e 100644
--- a/tests/data/test1215
+++ b/tests/data/test1215
@@ -60,6 +60,7 @@ Finally, this is the real page!
 NTLM
 !SSPI
 debug
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test1216 b/tests/data/test1216
index be0f5c77a..c4f977b60 100644
--- a/tests/data/test1216
+++ b/tests/data/test1216
@@ -39,6 +39,9 @@ example.fake  FALSE   /b      FALSE   0               moo1    indeed
 example.fake   FALSE   /c      FALSE   2139150993      moo2    indeed
 example.fake   TRUE    /c      FALSE   2139150993      moo3    indeed
 </file>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1218 b/tests/data/test1218
index e3f1f6d04..37c8f4ef0 100644
--- a/tests/data/test1218
+++ b/tests/data/test1218
@@ -32,6 +32,9 @@ HTTP cookies and domains with same prefix
  <command>
 http://example.fake/c/1218 http://example.fake/c/1218 http://bexample.fake/c/1218 -b nonexisting -x %HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1220 b/tests/data/test1220
index 6752eb580..c8eb52cb1 100644
--- a/tests/data/test1220
+++ b/tests/data/test1220
@@ -21,7 +21,7 @@ file
 file:// URLs with query string
  </name>
 <command option="no-include">
-file://localhost/%PWD/log/test1220.txt?a_query=foobar#afragment
+file://localhost%FILE_PWD/log/test1220.txt?a_query=foobar#afragment
 </command>
 <file name="log/test1220.txt">
 contents in a single file
diff --git a/tests/data/test1225 b/tests/data/test1225
index 2b2519c94..09a1abb79 100644
--- a/tests/data/test1225
+++ b/tests/data/test1225
@@ -45,7 +45,6 @@ TYPE I
 SIZE 1225
 RETR 1225
 CWD /
-CWD /
 CWD foo
 CWD bar
 EPSV
diff --git a/tests/data/test1228 b/tests/data/test1228
index a7e56a797..50af6bc2c 100644
--- a/tests/data/test1228
+++ b/tests/data/test1228
@@ -31,6 +31,9 @@ HTTP cookie path match
  <command>
 http://example.fake/hoge/1228 http://example.fake/hogege/ -b nonexisting -x %HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1230 b/tests/data/test1230
index ca2f6c67d..860ce21a4 100644
--- a/tests/data/test1230
+++ b/tests/data/test1230
@@ -43,6 +43,7 @@ mooooooo
 <client>
 <features>
 ipv6
+proxy
 </features>
 <server>
 http-proxy
diff --git a/tests/data/test1232 b/tests/data/test1232
index d0659f126..7425d44d8 100644
--- a/tests/data/test1232
+++ b/tests/data/test1232
@@ -41,6 +41,9 @@ HTTP URL with dotdot removal from path using an HTTP proxy
  <command>
 --proxy http://%HOSTIP:%HTTPPORT http://test.remote.haxx.se.1232:8990/../../hej/but/who/../1232?stupid=me/../1232#soo/../1232 http://test.remote.haxx.se.1232:8990/../../hej/but/who/../12320001#/../12320001
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1233 b/tests/data/test1233
index caf0527f2..1d4d3d561 100644
--- a/tests/data/test1233
+++ b/tests/data/test1233
@@ -2,6 +2,7 @@
 <info>
 <keywords>
 FTP
+connect to non-listen
 </keywords>
 </info>
 
diff --git a/tests/data/test1241 b/tests/data/test1241
index aaa568868..bc6c61801 100644
--- a/tests/data/test1241
+++ b/tests/data/test1241
@@ -40,6 +40,9 @@ HTTP _without_ dotdot removal
  <command>
 --path-as-is --proxy http://%HOSTIP:%HTTPPORT http://test.remote.haxx.se.1241:8990/../../hej/but/who/../1241?stupid=me/../1241#soo/../1241 http://test.remote.haxx.se.1241:8990/../../hej/but/who/../12410001#/../12410001
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1246 b/tests/data/test1246
index 65659292d..a35bc89c3 100644
--- a/tests/data/test1246
+++ b/tests/data/test1246
@@ -40,6 +40,9 @@ URL with '#' at end of host name instead of '/'
  <command>
 --proxy http://%HOSTIP:%HTTPPORT http://test.remote.haxx.se.1246:%HTTPPORT#@127.0.0.1/tricked.html no-scheme-url.com.1246:%HTTPPORT#@127.127.127.127/again.html
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1253 b/tests/data/test1253
index 74002994b..8f240b0af 100644
--- a/tests/data/test1253
+++ b/tests/data/test1253
@@ -35,6 +35,9 @@ NO_PROXY=example.com
 <command>
 http://somewhere.example.com/1253 --proxy http://%HOSTIP:%HTTPPORT --noproxy %HOSTIP
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1254 b/tests/data/test1254
index 817b9342b..c05975488 100644
--- a/tests/data/test1254
+++ b/tests/data/test1254
@@ -35,6 +35,9 @@ NO_PROXY=example.com
 <command>
 http://somewhere.example.com/1254 --proxy http://%HOSTIP:%HTTPPORT --noproxy ""
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1256 b/tests/data/test1256
index 09c59f4ff..e86afbb53 100644
--- a/tests/data/test1256
+++ b/tests/data/test1256
@@ -36,6 +36,9 @@ NO_PROXY=example.com
 <command>
 http://somewhere.example.com/1256 --noproxy %HOSTIP
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1257 b/tests/data/test1257
index 6b7e93736..16a7c1af6 100644
--- a/tests/data/test1257
+++ b/tests/data/test1257
@@ -36,6 +36,9 @@ NO_PROXY=example.com
 <command>
 http://somewhere.example.com/1257 --noproxy ""
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1287 b/tests/data/test1287
index 46c292497..976fd6ecf 100644
--- a/tests/data/test1287
+++ b/tests/data/test1287
@@ -60,6 +60,9 @@ HTTP over proxy-tunnel ignore TE and CL in CONNECT 2xx responses
 <command>
 -v --proxytunnel -x %HOSTIP:%PROXYPORT http://test.1287:%HTTPPORT/we/want/that/page/1287
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1288 b/tests/data/test1288
index 543aa3d6e..d8a1e524c 100644
--- a/tests/data/test1288
+++ b/tests/data/test1288
@@ -44,6 +44,9 @@ Suppress proxy CONNECT response headers
 <command>
 --proxytunnel --suppress-connect-headers --dump-header - --include --write-out "\nCONNECT CODE: %{http_connect}\nRECEIVED HEADER BYTE TOTAL: %{size_header}\n" --proxy %HOSTIP:%PROXYPORT http://%HOSTIP.1288:%HTTPPORT/we/want/that/page/1288
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1314 b/tests/data/test1314
index 078ada64a..3963bd93f 100644
--- a/tests/data/test1314
+++ b/tests/data/test1314
@@ -56,6 +56,9 @@ HTTP Location: following a // prefixed url
  <command>
 http://firstplace.example.com/want/1314 -L -x http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1319 b/tests/data/test1319
index f50c53165..8fc968c89 100644
--- a/tests/data/test1319
+++ b/tests/data/test1319
@@ -50,6 +50,7 @@ http-proxy
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 POP3 fetch tunneled through HTTP proxy
diff --git a/tests/data/test1320 b/tests/data/test1320
index 7a15f8091..da4079e93 100644
--- a/tests/data/test1320
+++ b/tests/data/test1320
@@ -27,6 +27,7 @@ http-proxy
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 SMTP send tunneled through HTTP proxy
diff --git a/tests/data/test1321 b/tests/data/test1321
index 72a52c935..cc9117774 100644
--- a/tests/data/test1321
+++ b/tests/data/test1321
@@ -46,6 +46,7 @@ http-proxy
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 IMAP FETCH tunneled through HTTP proxy
diff --git a/tests/data/test1329 b/tests/data/test1329
index 3d2d0cb6c..2cec0b895 100644
--- a/tests/data/test1329
+++ b/tests/data/test1329
@@ -17,6 +17,9 @@ http
  <command>
 http://%HOSTIP:%HTTPPORT/we/want/that/page/1329 -x "/server"
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1331 b/tests/data/test1331
index 6b5823529..865abd969 100644
--- a/tests/data/test1331
+++ b/tests/data/test1331
@@ -64,6 +64,9 @@ HTTP --proxy-anyauth and 407 with cookies
  <command>
 -U myname:mypassword -x %HOSTIP:%HTTPPORT http://z.x.com/1331 --proxy-anyauth -c log/dump1331
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1415 b/tests/data/test1415
index 91abedc33..94ce02c59 100644
--- a/tests/data/test1415
+++ b/tests/data/test1415
@@ -46,6 +46,9 @@ TZ=GMT
 <command>
 http://example.com/we/want/1415 -b none -c log/jar1415.txt -x %HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1421 b/tests/data/test1421
index 6c59b2160..dea49e781 100644
--- a/tests/data/test1421
+++ b/tests/data/test1421
@@ -35,6 +35,9 @@ Re-using HTTP proxy connection for two different host names
  <command>
 --proxy http://%HOSTIP:%HTTPPORT http://test.remote.haxx.se.1421:8990/ http://different.remote.haxx.se.1421:8990
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1428 b/tests/data/test1428
index 59041ec96..f09c02dd9 100644
--- a/tests/data/test1428
+++ b/tests/data/test1428
@@ -52,6 +52,9 @@ HTTP over proxy-tunnel with --proxy-header and --header
  <command>
 http://test.1428:%HTTPPORT/we/want/that/page/1428 -p -x %HOSTIP:%PROXYPORT 
--user 'iam:my:;self' --header "header-type: server" --proxy-header 
"header-type: proxy"
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test143 b/tests/data/test143
index a4df8cbf1..0f36dd9c3 100644
--- a/tests/data/test143
+++ b/tests/data/test143
@@ -32,7 +32,8 @@ FTP URL with type=a
 USER anonymous
 PASS address@hidden
 PWD
-CWD /tmp
+CWD /
+CWD tmp
 CWD moo
 EPSV
 TYPE A
diff --git a/tests/data/test1445 b/tests/data/test1445
index f60483dcd..936c9aea6 100644
--- a/tests/data/test1445
+++ b/tests/data/test1445
@@ -21,7 +21,7 @@ perl %SRCDIR/libtest/test613.pl prepare %PWD/log/test1445.dir
 file:// with --remote-time
  </name>
  <command>
-file://localhost/%PWD/log/test1445.dir/plainfile.txt --remote-time
+file://localhost%FILE_PWD/log/test1445.dir/plainfile.txt --remote-time
 </command>
 <postcheck>
 perl %SRCDIR/libtest/test613.pl postprocess %PWD/log/test1445.dir && \
diff --git a/tests/data/test1447 b/tests/data/test1447
index e62cd72f2..d1182942e 100644
--- a/tests/data/test1447
+++ b/tests/data/test1447
@@ -18,6 +18,7 @@ none
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 Provide illegal proxy name 
diff --git a/tests/data/test1455 b/tests/data/test1455
index 2684d34e9..cbe6fe22e 100644
--- a/tests/data/test1455
+++ b/tests/data/test1455
@@ -39,6 +39,9 @@ HTTP GET when PROXY Protocol enabled
 <command>
 http://%HOSTIP:%HTTPPORT/1455 --haproxy-protocol
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1456 b/tests/data/test1456
index 45244e604..27d63f505 100644
--- a/tests/data/test1456
+++ b/tests/data/test1456
@@ -42,6 +42,9 @@ HTTP-IPv6 GET with PROXY protocol
  <command>
 -g "http://%HOST6IP:%HTTP6PORT/1456"; --local-port 44444 --haproxy-protocol
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1509 b/tests/data/test1509
index b4bfc6603..faffc5d9e 100644
--- a/tests/data/test1509
+++ b/tests/data/test1509
@@ -53,7 +53,9 @@ http-proxy
 <tool>
 lib1509
 </tool>
-
+<features>
+proxy
+</features>
  <name>
 simple multi http:// through proxytunnel with authentication info
  </name>
diff --git a/tests/data/test1525 b/tests/data/test1525
index 595da5ea9..673e048c8 100644
--- a/tests/data/test1525
+++ b/tests/data/test1525
@@ -49,6 +49,9 @@ CURLOPT_PROXYHEADER is ignored CURLHEADER_UNIFIED
  <command>
  http://the.old.moo.1525:%HTTPPORT/1525 %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1526 b/tests/data/test1526
index aa111c890..f6fb44dd5 100644
--- a/tests/data/test1526
+++ b/tests/data/test1526
@@ -51,6 +51,9 @@ CURLOPT_PROXYHEADER: separate host/proxy headers
  <command>
  http://the.old.moo.1526:%HTTPPORT/1526 %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1527 b/tests/data/test1527
index e8d52794b..6bb87d14a 100644
--- a/tests/data/test1527
+++ b/tests/data/test1527
@@ -50,6 +50,9 @@ Check same headers are generated with CURLOPT_HEADEROPT == 
CURLHEADER_UNIFIED
  <command>
  http://the.old.moo.1527:%HTTPPORT/1527 %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1528 b/tests/data/test1528
index 876806af4..72c0a32d1 100644
--- a/tests/data/test1528
+++ b/tests/data/test1528
@@ -43,6 +43,9 @@ Separately specified proxy/server headers sent in a proxy GET
  <command>
  http://the.old.moo:%HTTPPORT/1528 %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test1529 b/tests/data/test1529
index 33df26824..f7be50367 100644
--- a/tests/data/test1529
+++ b/tests/data/test1529
@@ -31,6 +31,9 @@ HTTP request-injection in URL sent over proxy
  <command>
  "http://the.old.moo:%HTTPPORT/1529"; %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # it should be detected and an error should be reported
diff --git a/tests/data/test1591 b/tests/data/test1591
index e864fdbaa..526933a0e 100644
--- a/tests/data/test1591
+++ b/tests/data/test1591
@@ -19,7 +19,7 @@ Server: test-server/fake
 # Client-side
 <client>
 <features>
-HTTP
+http
 </features>
 <server>
 http
diff --git a/tests/data/test1596 b/tests/data/test1596
index 9a8cb480e..77a10f08a 100644
--- a/tests/data/test1596
+++ b/tests/data/test1596
@@ -12,7 +12,7 @@ If-Modified-Since
 # Server-side
 <reply>
 <data nocheck="yes">
-HTTP/1.1 503 Error
+HTTP/1.1 429 Too Many Requests
 Date: Thu, 11 Jul 2019 02:26:59 GMT
 Server: test-server/swsclose
 Retry-After: Thu, 11 Jul 2024 02:26:59 GMT
diff --git a/tests/data/test16 b/tests/data/test16
index 15f4c7a7b..399aa9420 100644
--- a/tests/data/test16
+++ b/tests/data/test16
@@ -33,6 +33,9 @@ HTTP with proxy authorization
  <command>
  -U 
fake@user:loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong
 -x %HOSTIP:%HTTPPORT http://we.want.that.site.com/16
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test162 b/tests/data/test162
index ee2f40aa7..099641a87 100644
--- a/tests/data/test162
+++ b/tests/data/test162
@@ -28,6 +28,7 @@ isn't because there's no Proxy-Authorization: NTLM header
 <features>
 NTLM
 !SSPI
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test165 b/tests/data/test165
index b9a1ed786..9009425aa 100644
--- a/tests/data/test165
+++ b/tests/data/test165
@@ -29,6 +29,7 @@ http
 </server>
 <features>
 idn
+proxy
 </features>
 <setenv>
 LC_ALL=
diff --git a/tests/data/test1654 b/tests/data/test1654
index 5b32cb419..6a82daa08 100644
--- a/tests/data/test1654
+++ b/tests/data/test1654
@@ -53,6 +53,7 @@ h1 2.example.org 8080 h3 2.example.org 8080 "20190125 
22:34:21" 0 0
 h1 3.example.org 8080 h2 example.com 8080 "20190125 22:34:21" 0 0
 h1 3.example.org 8080 h3 yesyes.com 8080 "20190125 22:34:21" 0 0
 h2 example.org 80 h2 example.com 443 "20190124 22:36:21" 0 0
+h2 example.net 80 h2 example.net 443 "20190124 22:37:21" 0 0
 </file>
 </verify>
 </testcase>
diff --git a/tests/data/test1600 b/tests/data/test1655
similarity index 83%
copy from tests/data/test1600
copy to tests/data/test1655
index 88040747a..0c10bedf4 100644
--- a/tests/data/test1600
+++ b/tests/data/test1655
@@ -2,7 +2,7 @@
 <info>
 <keywords>
 unittest
-NTLM
+doh
 </keywords>
 </info>
 
@@ -14,13 +14,12 @@ none
 </server>
 <features>
 unittest
-NTLM
 </features>
  <name>
-NTLM unit tests
+unit test for doh_encode
  </name>
 <tool>
-unit1600
+unit1655
 </tool>
 </client>
 
diff --git a/tests/data/test167 b/tests/data/test167
index 0b14996a3..e08555ecc 100644
--- a/tests/data/test167
+++ b/tests/data/test167
@@ -45,6 +45,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP with proxy-requiring-Basic to site-requiring-Digest
diff --git a/tests/data/test168 b/tests/data/test168
index 20e0b6d9c..fb8762044 100644
--- a/tests/data/test168
+++ b/tests/data/test168
@@ -59,6 +59,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP with proxy-requiring-Digest to site-requiring-Digest
diff --git a/tests/data/test169 b/tests/data/test169
index bb089ca35..8013bcc17 100644
--- a/tests/data/test169
+++ b/tests/data/test169
@@ -79,6 +79,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP with proxy-requiring-NTLM to site-requiring-Digest
diff --git a/tests/data/test170 b/tests/data/test170
index 8ce7774f9..49d595bbc 100644
--- a/tests/data/test170
+++ b/tests/data/test170
@@ -20,6 +20,7 @@ http
 <features>
 NTLM
 !SSPI
+proxy
 </features>
  <name>
 HTTP POST with --proxy-ntlm and no SSL with no response
diff --git a/tests/data/test171 b/tests/data/test171
index 09e48b70a..482c0b7e2 100644
--- a/tests/data/test171
+++ b/tests/data/test171
@@ -33,6 +33,9 @@ HTTP, get cookie with dot prefixed full domain
  <command>
 -c log/jar171 -x %HOSTIP:%HTTPPORT http://z.x.com/171
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test179 b/tests/data/test179
index f8f7811a7..3a94c00ba 100644
--- a/tests/data/test179
+++ b/tests/data/test179
@@ -38,6 +38,9 @@ supertrooper.fake     FALSE   /a      FALSE   2139150993      
mooo    indeed
 supertrooper.fake      FALSE   /b      FALSE   0               moo1    indeed
 supertrooper.fake      FALSE   /c      FALSE   2139150993      moo2    indeed
 </file>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test183 b/tests/data/test183
index f34dc0c98..cf992a26f 100644
--- a/tests/data/test183
+++ b/tests/data/test183
@@ -30,6 +30,9 @@ HTTP GET two URLs over a single proxy with persistent 
connection
  <command>
 http://deathstar.another.galaxy/183 http://a.galaxy.far.far.away/183 --proxy 
http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test184 b/tests/data/test184
index 8b09dde28..42e652e3b 100644
--- a/tests/data/test184
+++ b/tests/data/test184
@@ -50,6 +50,9 @@ HTTP replace Host: when following Location: to new host
  <command>
 http://deathstar.another.galaxy/184 -L -H "Host: 
another.visitor.stay.a.while.stay.foreeeeeever" --proxy http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test185 b/tests/data/test185
index 298dd49ce..3bc58a041 100644
--- a/tests/data/test185
+++ b/tests/data/test185
@@ -50,6 +50,9 @@ HTTP replace Host: when following Location: on the same host
  <command>
 http://deathstar.another.galaxy/185 -L -H "Host: 
another.visitor.stay.a.while.stay.foreeeeeever" --proxy http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test19 b/tests/data/test19
index dd60b8cf3..8e1bf5eb4 100644
--- a/tests/data/test19
+++ b/tests/data/test19
@@ -24,7 +24,7 @@ http
 attempt connect to non-listening socket
  </name>
  <command>
-%HOSTIP:60000
+%HOSTIP:2
 </command>
 </client>
 
diff --git a/tests/data/test1904 b/tests/data/test1904
index 08ad534a6..760285472 100644
--- a/tests/data/test1904
+++ b/tests/data/test1904
@@ -53,6 +53,9 @@ HTTP CONNECT with 204 response
  <command>
 http://test.1904:%HTTPPORT/we/want/that/page/1904 -p --proxy %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test1906 b/tests/data/test1907
similarity index 67%
copy from tests/data/test1906
copy to tests/data/test1907
index 0ff2b2be0..93f37051e 100644
--- a/tests/data/test1906
+++ b/tests/data/test1907
@@ -1,8 +1,7 @@
 <testcase>
 <info>
 <keywords>
-CURLOPT_CURLU
-CURLOPT_PORT
+CURLINFO_EFFECTIVE_URL
 </keywords>
 </info>
 
@@ -14,6 +13,7 @@ Date: Thu, 09 Nov 2010 14:49:00 GMT
 Server: test-server/fake
 Content-Type: text/html
 Funny-head: yesyes swsclose
+Content-Length: 0
 
 </data>
 </reply>
@@ -24,16 +24,14 @@ Funny-head: yesyes swsclose
 http
 </server>
  <name>
-CURLOPT_CURLU and CURLOPT_PORT
+CURLINFO_EFFECTIVE_URL with non-scheme URL
  </name>
 <tool>
-lib1906
+lib1907
 </tool>
 
-# The tool does two requesets, the first sets CURLOPT_PORT to 1
-# the second resets the port again and expects that request to work.
 <command>
-http://%HOSTIP:%HTTPPORT/1906
+%HOSTIP:%HTTPPORT/hello/../1907
 </command>
 </client>
 
@@ -43,10 +41,13 @@ http://%HOSTIP:%HTTPPORT/1906
 ^User-Agent:.*
 </strip>
 <protocol>
-GET /1906 HTTP/1.1
+GET /1907 HTTP/1.1
 Host: %HOSTIP:%HTTPPORT
 Accept: */*
 
 </protocol>
+<stdout>
+Effective URL: http://%HOSTIP:%HTTPPORT/1907
+</stdout>
 </verify>
 </testcase>
diff --git a/tests/data/test200 b/tests/data/test200
index c27f7c095..d8adda7d8 100644
--- a/tests/data/test200
+++ b/tests/data/test200
@@ -24,7 +24,7 @@ file
 basic file:// file
  </name>
 <command option="no-include">
-file://localhost/%PWD/log/test200.txt
+file://localhost%FILE_PWD/log/test200.txt
 </command>
 <file name="log/test200.txt">
 foo
diff --git a/tests/data/test2000 b/tests/data/test2000
index db1ba1330..a91dcd2c7 100644
--- a/tests/data/test2000
+++ b/tests/data/test2000
@@ -32,7 +32,7 @@ file
 FTP RETR followed by FILE
  </name>
 <command option="no-include">
-ftp://%HOSTIP:%FTPPORT/2000 file://localhost/%PWD/log/test2000.txt
+ftp://%HOSTIP:%FTPPORT/2000 file://localhost%FILE_PWD/log/test2000.txt
 </command>
 <file name="log/test2000.txt">
 foo
diff --git a/tests/data/test2001 b/tests/data/test2001
index 88a258ebb..9232499f9 100644
--- a/tests/data/test2001
+++ b/tests/data/test2001
@@ -49,7 +49,7 @@ file
 HTTP GET followed by FTP RETR followed by FILE
  </name>
 <command option="no-include">
-http://%HOSTIP:%HTTPPORT/20010001 ftp://%HOSTIP:%FTPPORT/20010002 
file://localhost/%PWD/log/test2001.txt
+http://%HOSTIP:%HTTPPORT/20010001 ftp://%HOSTIP:%FTPPORT/20010002 
file://localhost%FILE_PWD/log/test2001.txt
 </command>
 <file name="log/test2001.txt">
 foo
diff --git a/tests/data/test2002 b/tests/data/test2002
index 6dd2f9310..efe75fa3b 100644
--- a/tests/data/test2002
+++ b/tests/data/test2002
@@ -58,7 +58,7 @@ tftp
 HTTP GET followed by FTP RETR followed by FILE followed by TFTP RRQ
  </name>
 <command option="no-include">
-http://%HOSTIP:%HTTPPORT/20020001 ftp://%HOSTIP:%FTPPORT/20020002 
file://localhost/%PWD/log/test2002.txt tftp://%HOSTIP:%TFTPPORT//20020003
+http://%HOSTIP:%HTTPPORT/20020001 ftp://%HOSTIP:%FTPPORT/20020002 
file://localhost%FILE_PWD/log/test2002.txt tftp://%HOSTIP:%TFTPPORT//20020003
 </command>
 <file name="log/test2002.txt">
 foo
diff --git a/tests/data/test2003 b/tests/data/test2003
index 09bee8e22..68ae71429 100644
--- a/tests/data/test2003
+++ b/tests/data/test2003
@@ -58,7 +58,7 @@ tftp
 HTTP GET followed by FTP RETR followed by FILE followed by TFTP RRQ then again 
in reverse order
  </name>
 <command option="no-include">
-http://%HOSTIP:%HTTPPORT/20030001 ftp://%HOSTIP:%FTPPORT/20030002 
file://localhost/%PWD/log/test2003.txt tftp://%HOSTIP:%TFTPPORT//20030003 
tftp://%HOSTIP:%TFTPPORT//20030003 file://localhost/%PWD/log/test2003.txt 
ftp://%HOSTIP:%FTPPORT/20030002 http://%HOSTIP:%HTTPPORT/20030001
+http://%HOSTIP:%HTTPPORT/20030001 ftp://%HOSTIP:%FTPPORT/20030002 
file://localhost%FILE_PWD/log/test2003.txt tftp://%HOSTIP:%TFTPPORT//20030003 
tftp://%HOSTIP:%TFTPPORT//20030003 file://localhost%FILE_PWD/log/test2003.txt 
ftp://%HOSTIP:%FTPPORT/20030002 http://%HOSTIP:%HTTPPORT/20030001
 </command>
 <file name="log/test2003.txt">
 foo
diff --git a/tests/data/test2004 b/tests/data/test2004
index b17890b0f..5b3b68d0c 100644
--- a/tests/data/test2004
+++ b/tests/data/test2004
@@ -30,7 +30,7 @@ sftp
 TFTP RRQ followed by SFTP retrieval followed by FILE followed by SCP retrieval 
then again in reverse order
  </name>
 <command option="no-include">
---key curl_client_key --pubkey curl_client_key.pub -u %USER: 
tftp://%HOSTIP:%TFTPPORT//2004 
sftp://%HOSTIP:%SSHPORT%POSIX_PWD/log/test2004.txt 
file://localhost/%PWD/log/test2004.txt 
scp://%HOSTIP:%SSHPORT%POSIX_PWD/log/test2004.txt 
file://localhost/%PWD/log/test2004.txt 
sftp://%HOSTIP:%SSHPORT%POSIX_PWD/log/test2004.txt 
tftp://%HOSTIP:%TFTPPORT//2004 --insecure
+--key curl_client_key --pubkey curl_client_key.pub -u %USER: 
tftp://%HOSTIP:%TFTPPORT//2004 
sftp://%HOSTIP:%SSHPORT%POSIX_PWD/log/test2004.txt 
file://localhost%FILE_PWD/log/test2004.txt 
scp://%HOSTIP:%SSHPORT%POSIX_PWD/log/test2004.txt 
file://localhost%FILE_PWD/log/test2004.txt 
sftp://%HOSTIP:%SSHPORT%POSIX_PWD/log/test2004.txt 
tftp://%HOSTIP:%TFTPPORT//2004 --insecure
 </command>
 <file name="log/test2004.txt">
 This is test data
diff --git a/tests/data/test2005 b/tests/data/test2005
index 061f99b66..f78b4be56 100644
--- a/tests/data/test2005
+++ b/tests/data/test2005
@@ -78,7 +78,7 @@ Data delivered from an HTTP resource
 </file1>
 <file2 name="log/stdout2005">
 </file2>
-<file3 name="log/stderr2005">
+<file3 name="log/stderr2005" mode="text">
 Metalink: parsing (file://%PWD/log/test2005.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2005.metalink) OK
 Metalink: fetching (log/download2005) from (http://%HOSTIP:%HTTPPORT/2005)...
diff --git a/tests/data/test2006 b/tests/data/test2006
index 4d08e0aad..1f5971726 100644
--- a/tests/data/test2006
+++ b/tests/data/test2006
@@ -98,7 +98,7 @@ Funny-head: yesyes
 </file2>
 <file3 name="log/stdout2006">
 </file3>
-<file4 name="log/stderr2006">
+<file4 name="log/stderr2006" mode="text">
 Metalink: parsing (file://%PWD/log/test2006.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2006.metalink) OK
 Metalink: fetching (log/download2006) from (http://%HOSTIP:%HTTPPORT/2006)...
diff --git a/tests/data/test2007 b/tests/data/test2007
index bb4d5cde9..a8e5f1b45 100644
--- a/tests/data/test2007
+++ b/tests/data/test2007
@@ -102,7 +102,7 @@ Funny-head: yesyes
 </file2>
 <file3 name="log/stdout2007">
 </file3>
-<file4 name="log/stderr2007">
+<file4 name="log/stderr2007" mode="text">
 Metalink: parsing (file://%PWD/log/test2007.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2007.metalink) OK
 Metalink: fetching (log/download2007) from (http://%HOSTIP:%HTTPPORT/2007)...
diff --git a/tests/data/test2008 b/tests/data/test2008
index d6bbf6b4b..1a0033285 100644
--- a/tests/data/test2008
+++ b/tests/data/test2008
@@ -94,7 +94,7 @@ Funny-head: yesyes
 </file2>
 <file3 name="log/stdout2008">
 </file3>
-<file4 name="log/stderr2008">
+<file4 name="log/stderr2008" mode="text">
 Metalink: parsing (file://%PWD/log/test2008.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2008.metalink) OK
 Metalink: fetching (log/download2008) from (http://%HOSTIP:%HTTPPORT/2008)...
diff --git a/tests/data/test2009 b/tests/data/test2009
index 1a9335851..08308d03e 100644
--- a/tests/data/test2009
+++ b/tests/data/test2009
@@ -95,7 +95,7 @@ Funny-head: yesyes
 </file2>
 <file3 name="log/stdout2009">
 </file3>
-<file4 name="log/stderr2009">
+<file4 name="log/stderr2009" mode="text">
 Metalink: parsing (file://%PWD/log/test2009.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2009.metalink) OK
 Metalink: fetching (log/download2009) from (http://%HOSTIP:%HTTPPORT/2009)...
diff --git a/tests/data/test2010 b/tests/data/test2010
index 1f5320fe9..068c481b5 100644
--- a/tests/data/test2010
+++ b/tests/data/test2010
@@ -94,7 +94,7 @@ Funny-head: yesyes
 </file2>
 <file3 name="log/stdout2010">
 </file3>
-<file4 name="log/stderr2010">
+<file4 name="log/stderr2010" mode="text">
 Metalink: parsing (file://%PWD/log/test2010.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2010.metalink) OK
 Metalink: fetching (log/download2010) from (http://%HOSTIP:%HTTPPORT/2010)...
diff --git a/tests/data/test2011 b/tests/data/test2011
index 46785cf94..a84502317 100644
--- a/tests/data/test2011
+++ b/tests/data/test2011
@@ -78,7 +78,7 @@ Data delivered from an HTTP resource
 </file1>
 <file2 name="log/stdout2011">
 </file2>
-<file3 name="log/stderr2011">
+<file3 name="log/stderr2011" mode="text">
 Metalink: parsing (file://%PWD/log/test2011.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2011.metalink) OK
 Metalink: fetching (log/download2011) from (http://%HOSTIP:%HTTPPORT/2011)...
diff --git a/tests/data/test2012 b/tests/data/test2012
index 59c042d12..6751269d5 100644
--- a/tests/data/test2012
+++ b/tests/data/test2012
@@ -77,7 +77,7 @@ Some contents delivered from an HTTP resource
 </file1>
 <file2 name="log/stdout2012">
 </file2>
-<file3 name="log/stderr2012">
+<file3 name="log/stderr2012" mode="text">
 Metalink: parsing (file://%PWD/log/test2012.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2012.metalink) WARNING (digest missing)
 Metalink: fetching (log/download2012) from (http://%HOSTIP:%HTTPPORT/2012)...
diff --git a/tests/data/test2013 b/tests/data/test2013
index 0985b32de..f4d0c2475 100644
--- a/tests/data/test2013
+++ b/tests/data/test2013
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2013 log/name2013 
/tmp/download2013
 <verify>
 <file1 name="log/stdout2013">
 </file1>
-<file2 name="log/stderr2013">
+<file2 name="log/stderr2013" mode="text">
 Metalink: parsing (file://%PWD/log/test2013.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2013.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2013.metalink) FAILED
diff --git a/tests/data/test2014 b/tests/data/test2014
index d2dbdc7a7..65d2ec766 100644
--- a/tests/data/test2014
+++ b/tests/data/test2014
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2014 log/name2014 
log/download2014
 <verify>
 <file1 name="log/stdout2014">
 </file1>
-<file2 name="log/stderr2014">
+<file2 name="log/stderr2014" mode="text">
 Metalink: parsing (file://%PWD/log/test2014.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2014.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2014.metalink) FAILED
diff --git a/tests/data/test2015 b/tests/data/test2015
index a35f3117d..d356f88bc 100644
--- a/tests/data/test2015
+++ b/tests/data/test2015
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2015 log/name2015 
log/download2015
 <verify>
 <file1 name="log/stdout2015">
 </file1>
-<file2 name="log/stderr2015">
+<file2 name="log/stderr2015" mode="text">
 Metalink: parsing (file://%PWD/log/test2015.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2015.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2015.metalink) FAILED
diff --git a/tests/data/test2016 b/tests/data/test2016
index 572aa65c5..ff2862d51 100644
--- a/tests/data/test2016
+++ b/tests/data/test2016
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2016 log/name2016 
log/download2016
 <verify>
 <file1 name="log/stdout2016">
 </file1>
-<file2 name="log/stderr2016">
+<file2 name="log/stderr2016" mode="text">
 Metalink: parsing (file://%PWD/log/test2016.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2016.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2016.metalink) FAILED
diff --git a/tests/data/test2017 b/tests/data/test2017
index 15fd9347c..11c71c3db 100644
--- a/tests/data/test2017
+++ b/tests/data/test2017
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2017 log/name2017
 <verify>
 <file1 name="log/stdout2017">
 </file1>
-<file2 name="log/stderr2017">
+<file2 name="log/stderr2017" mode="text">
 Metalink: parsing (file://%PWD/log/test2017.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2017.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2017.metalink) FAILED
diff --git a/tests/data/test2018 b/tests/data/test2018
index 6d0652dcc..9fb433d94 100644
--- a/tests/data/test2018
+++ b/tests/data/test2018
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2018 log/name2018 
log/.download2018
 <verify>
 <file1 name="log/stdout2018">
 </file1>
-<file2 name="log/stderr2018">
+<file2 name="log/stderr2018" mode="text">
 Metalink: parsing (file://%PWD/log/test2018.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2018.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2018.metalink) FAILED
diff --git a/tests/data/test2019 b/tests/data/test2019
index b17b3f23f..abd8cad9a 100644
--- a/tests/data/test2019
+++ b/tests/data/test2019
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2019 log/name2019
 <verify>
 <file1 name="log/stdout2019">
 </file1>
-<file2 name="log/stderr2019">
+<file2 name="log/stderr2019" mode="text">
 Metalink: parsing (file://%PWD/log/test2019.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2019.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2019.metalink) FAILED
diff --git a/tests/data/test202 b/tests/data/test202
index 0b324b1d8..ad9d854d5 100644
--- a/tests/data/test202
+++ b/tests/data/test202
@@ -20,7 +20,7 @@ file
 two file:// URLs to stdout
  </name>
 <command option="no-include">
-file://localhost/%PWD/log/test202.txt FILE://localhost/%PWD/log/test202.txt
+file://localhost%FILE_PWD/log/test202.txt 
FILE://localhost%FILE_PWD/log/test202.txt
 </command>
 <file name="log/test202.txt">
 contents in a single file
diff --git a/tests/data/test2020 b/tests/data/test2020
index 8bf85a4d7..584f6df3b 100644
--- a/tests/data/test2020
+++ b/tests/data/test2020
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2020 log/name2020
 <verify>
 <file1 name="log/stdout2020">
 </file1>
-<file2 name="log/stderr2020">
+<file2 name="log/stderr2020" mode="text">
 Metalink: parsing (file://%PWD/log/test2020.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2020.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2020.metalink) FAILED
diff --git a/tests/data/test2021 b/tests/data/test2021
index 20a92244a..b0921d48a 100644
--- a/tests/data/test2021
+++ b/tests/data/test2021
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2021 log/name2021 
log/download2021
 <verify>
 <file1 name="log/stdout2021">
 </file1>
-<file2 name="log/stderr2021">
+<file2 name="log/stderr2021" mode="text">
 Metalink: parsing (file://%PWD/log/test2021.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2021.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2021.metalink) FAILED
diff --git a/tests/data/test2022 b/tests/data/test2022
index 4f4efd176..e9044732a 100644
--- a/tests/data/test2022
+++ b/tests/data/test2022
@@ -66,7 +66,7 @@ perl %SRCDIR/libtest/notexists.pl log/2022 log/name2022 
log/download2022
 <verify>
 <file1 name="log/stdout2022">
 </file1>
-<file2 name="log/stderr2022">
+<file2 name="log/stderr2022" mode="text">
 Metalink: parsing (file://%PWD/log/test2022.metalink) metalink/XML...
 Metalink: parsing (file://%PWD/log/test2022.metalink) WARNING (missing or 
invalid file name)
 Metalink: parsing (file://%PWD/log/test2022.metalink) FAILED
diff --git a/tests/data/test204 b/tests/data/test204
index 0ed94512f..5dad0149c 100644
--- a/tests/data/test204
+++ b/tests/data/test204
@@ -16,7 +16,7 @@ file
 "upload" with file://
  </name>
 <command option="no-include">
-file://localhost/%PWD/log/result204.txt -T log/upload204.txt
+file://localhost%FILE_PWD/log/result204.txt -T log/upload204.txt
 </command>
 <file name="log/upload204.txt">
 data
diff --git a/tests/data/test2050 b/tests/data/test2050
index 81ef79ffc..5eef3dae2 100644
--- a/tests/data/test2050
+++ b/tests/data/test2050
@@ -53,6 +53,9 @@ Connect to specific host via HTTP proxy (switch to tunnel 
mode automatically)
  <command>
 http://www.example.com.2050/2050 --connect-to 
::connect.example.com.2050:%HTTPPORT -x %HOSTIP:%PROXYPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test2055 b/tests/data/test2055
index cca44942f..a5fac62be 100644
--- a/tests/data/test2055
+++ b/tests/data/test2055
@@ -51,7 +51,9 @@ socks5
  <name>
 Connect to specific host via SOCKS proxy and HTTP proxy (switch to tunnel mode 
automatically)
  </name>
-
+<features>
+proxy
+</features>
  <command>
 http://www.example.com.2055/2055 --connect-to 
::connect.example.com.2055:%HTTPPORT -x %HOSTIP:%PROXYPORT --preproxy 
socks5://%HOSTIP:%SOCKSPORT
 </command>
diff --git a/tests/data/test2058 b/tests/data/test2058
index 0082503e0..65a907f43 100644
--- a/tests/data/test2058
+++ b/tests/data/test2058
@@ -66,6 +66,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP POST --digest with PUT, resumed upload, modified method and SHA-256
diff --git a/tests/data/test2059 b/tests/data/test2059
index b74b0bdc1..4272a7b41 100644
--- a/tests/data/test2059
+++ b/tests/data/test2059
@@ -66,6 +66,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP POST --digest with PUT, resumed upload, modified method, SHA-512-256 and 
userhash=true
diff --git a/tests/data/test206 b/tests/data/test206
index 5f0c88562..f99ac4c71 100644
--- a/tests/data/test206
+++ b/tests/data/test206
@@ -73,6 +73,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP proxy CONNECT auth Digest
diff --git a/tests/data/test2060 b/tests/data/test2060
index f323eb520..a0b291dc2 100644
--- a/tests/data/test2060
+++ b/tests/data/test2060
@@ -66,6 +66,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP POST --digest with PUT, resumed upload, modified method, SHA-512-256 and 
userhash=false
diff --git a/tests/data/test2071 b/tests/data/test2071
index eddfa4df7..900f0d5a8 100644
--- a/tests/data/test2071
+++ b/tests/data/test2071
@@ -24,7 +24,7 @@ file
 basic file:// file with "127.0.0.1" hostname
  </name>
 <command option="no-include">
-file://127.0.0.1/%PWD/log/test2070.txt
+file://127.0.0.1%FILE_PWD/log/test2070.txt
 </command>
 <file name="log/test2070.txt">
 foo
diff --git a/tests/data/test208 b/tests/data/test208
index afb2566b5..1c86558a8 100644
--- a/tests/data/test208
+++ b/tests/data/test208
@@ -27,6 +27,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
  <name>
 HTTP PUT to a FTP URL with username+password - over HTTP proxy
diff --git a/tests/data/test209 b/tests/data/test209
index a0cc1a533..aded6d2d2 100644
--- a/tests/data/test209
+++ b/tests/data/test209
@@ -79,6 +79,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy CONNECT auth NTLM
diff --git a/tests/data/test213 b/tests/data/test213
index 819c8016d..82d82c483 100644
--- a/tests/data/test213
+++ b/tests/data/test213
@@ -79,6 +79,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP 1.0 proxy CONNECT auth NTLM and then POST
diff --git a/tests/data/test217 b/tests/data/test217
index f10df566b..4ab51a2f3 100644
--- a/tests/data/test217
+++ b/tests/data/test217
@@ -34,6 +34,9 @@ HTTP proxy CONNECT to proxy returning 405
  <command>
 http://test.remote.example.com.217:%HTTPPORT/path/2170002 --proxy 
http://%HOSTIP:%HTTPPORT --proxytunnel -w "%{http_code} %{http_connect}\n"
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test219 b/tests/data/test219
index be3f0f3c5..49f17e43f 100644
--- a/tests/data/test219
+++ b/tests/data/test219
@@ -18,6 +18,7 @@ none
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 try using proxy with unsupported scheme
diff --git a/tests/data/test231 b/tests/data/test231
index 3d4bc7730..7254953e1 100644
--- a/tests/data/test231
+++ b/tests/data/test231
@@ -23,7 +23,7 @@ file
 file:// with resume
  </name>
 <command option="no-include">
-file://localhost/%PWD/log/test231.txt -C 10
+file://localhost%FILE_PWD/log/test231.txt -C 10
 </command>
 <file name="log/test231.txt">
 A01234567
diff --git a/tests/data/test233 b/tests/data/test233
index b631e52cf..a38d8c95e 100644
--- a/tests/data/test233
+++ b/tests/data/test233
@@ -67,6 +67,9 @@ HTTP, proxy, site+proxy auth and Location: to new host
  <command>
 http://first.host.it.is/we/want/that/page/233 -x %HOSTIP:%HTTPPORT --user 
iam:myself --proxy-user testing:this --location
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test234 b/tests/data/test234
index 1d2e05b39..9e197cd1d 100644
--- a/tests/data/test234
+++ b/tests/data/test234
@@ -69,6 +69,9 @@ HTTP, proxy, site+proxy auth and Location: to new host using 
location-trusted
  <command>
 http://first.host.it.is/we/want/that/page/234 -x %HOSTIP:%HTTPPORT --user 
iam:myself --proxy-user testing:this --location-trusted
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test239 b/tests/data/test239
index c9e5b6ac7..a6f1fd59b 100644
--- a/tests/data/test239
+++ b/tests/data/test239
@@ -54,6 +54,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy-auth NTLM and then POST
diff --git a/tests/data/test243 b/tests/data/test243
index 7d1ed7d6e..f83218e14 100644
--- a/tests/data/test243
+++ b/tests/data/test243
@@ -75,6 +75,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP POST with --proxy-anyauth, picking NTLM
diff --git a/tests/data/test244 b/tests/data/test244
index 8ce4b6346..080163dd1 100644
--- a/tests/data/test244
+++ b/tests/data/test244
@@ -47,7 +47,7 @@ PASS address@hidden
 PWD
 EPSV
 TYPE A
-LIST fir#t/third/244/
+LIST fir#t/third/244
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test256 b/tests/data/test256
index 17ae807d1..1567c6292 100644
--- a/tests/data/test256
+++ b/tests/data/test256
@@ -35,6 +35,9 @@ HTTP resume request over proxy with auth without server 
supporting it
 This text is here to simulate a partly downloaded file to resume
 download on.
 </file>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test257 b/tests/data/test257
index 502448ddb..59a419bb1 100644
--- a/tests/data/test257
+++ b/tests/data/test257
@@ -73,7 +73,9 @@ HTTP Location: following with --netrc-optional
  <command>
 http://supersite.com/want/257 -L -x http://%HOSTIP:%HTTPPORT --netrc-optional 
--netrc-file log/netrc257
 </command>
-
+<features>
+proxy
+</features>
 # netrc auth for two out of three sites:
 <file name="log/netrc257">
 machine supersite.com login user1 password passwd1
diff --git a/tests/data/test258 b/tests/data/test258
index 98c340141..6c10564b8 100644
--- a/tests/data/test258
+++ b/tests/data/test258
@@ -58,6 +58,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP POST multipart without Expect: header using proxy anyauth (Digest)
diff --git a/tests/data/test259 b/tests/data/test259
index 6e1853601..58d25120f 100644
--- a/tests/data/test259
+++ b/tests/data/test259
@@ -54,6 +54,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP POST multipart with Expect: header using proxy anyauth (Digest)
diff --git a/tests/data/test263 b/tests/data/test263
index 5088141f5..e10c20741 100644
--- a/tests/data/test263
+++ b/tests/data/test263
@@ -25,6 +25,7 @@ hello
 <client>
 <features>
 ipv6
+proxy
 </features>
 <server>
 http-ipv6
diff --git a/tests/data/test264 b/tests/data/test264
index f4d171a16..5aca0e6c5 100644
--- a/tests/data/test264
+++ b/tests/data/test264
@@ -30,6 +30,9 @@ HTTP with proxy string including http:// and user+password
  <command>
 http://we.want.that.site.com/264 -x http://f%61ke:user@%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test265 b/tests/data/test265
index 2e26ff5e5..ff7d5945e 100644
--- a/tests/data/test265
+++ b/tests/data/test265
@@ -80,6 +80,7 @@ http
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy CONNECT auth NTLM and then POST, response-body in the 407
diff --git a/tests/data/test275 b/tests/data/test275
index 802c4bbcc..6065b4d81 100644
--- a/tests/data/test275
+++ b/tests/data/test275
@@ -56,6 +56,9 @@ HTTP CONNECT with proxytunnel getting two URLs from the same 
host
  <command>
 http://remotesite.com.275:%HTTPPORT/we/want/that/page/275 -p -x 
%HOSTIP:%PROXYPORT --user iam:myself --proxy-user youare:yourself 
http://remotesite.com.275:%HTTPPORT/we/want/that/page/275
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test278 b/tests/data/test278
index 3112264a3..620f56b17 100644
--- a/tests/data/test278
+++ b/tests/data/test278
@@ -30,6 +30,9 @@ HTTP with proxy string including http:// and user+empty 
password
  <command>
 http://we.want.that.site.com/278 -x http://f%61ke:@%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test279 b/tests/data/test279
index 47f8b687e..d5f4194f1 100644
--- a/tests/data/test279
+++ b/tests/data/test279
@@ -31,6 +31,9 @@ HTTP with proxy string including http:// and user only
  <command>
 http://we.want.that.site.com/279 -x http://f%61ke@%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test287 b/tests/data/test287
index 6772e220f..7c29f7f02 100644
--- a/tests/data/test287
+++ b/tests/data/test287
@@ -30,6 +30,9 @@ HTTP proxy CONNECT with custom User-Agent header
  <command>
 http://test.remote.example.com.287:%HTTPPORT/path/287 -H "User-Agent: 
looser/2015" --proxy http://%HOSTIP:%HTTPPORT --proxytunnel --proxy-header 
"User-Agent: looser/2007"
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test288 b/tests/data/test288
index 9f8f6e121..e62eabd3c 100644
--- a/tests/data/test288
+++ b/tests/data/test288
@@ -31,7 +31,7 @@ file:// with (unsupported) proxy, authentication and range
 all_proxy=http://fake:user@%HOSTIP:%HTTPPORT/
 </setenv>
 <command option="no-include">
-file://localhost/%PWD/log/test288.txt
+file://localhost%FILE_PWD/log/test288.txt
 </command>
 <file name="log/test288.txt">
 foo
diff --git a/tests/data/test299 b/tests/data/test299
index 4daaea47d..cfa743020 100644
--- a/tests/data/test299
+++ b/tests/data/test299
@@ -27,6 +27,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
  <name>
 FTP over HTTP proxy with user:pass not in url
diff --git a/tests/data/test317 b/tests/data/test317
index c6d8697be..68a9b5c79 100644
--- a/tests/data/test317
+++ b/tests/data/test317
@@ -67,6 +67,9 @@ HTTP with custom Authorization: and redirect to new host
  <command>
 http://first.host.it.is/we/want/that/page/317 -x %HOSTIP:%HTTPPORT -H 
"Authorization: s3cr3t" --proxy-user testing:this --location
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test318 b/tests/data/test318
index 838d1ba0f..fd82c7aed 100644
--- a/tests/data/test318
+++ b/tests/data/test318
@@ -67,6 +67,9 @@ HTTP with custom Authorization: and redirect to new host
  <command>
 http://first.host.it.is/we/want/that/page/318 -x %HOSTIP:%HTTPPORT -H 
"Authorization: s3cr3t" --proxy-user testing:this --location-trusted
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test330 b/tests/data/test330
index 74607d5ee..6cda172f6 100644
--- a/tests/data/test330
+++ b/tests/data/test330
@@ -65,6 +65,9 @@ HTTP with custom Cookie: and redirect to new host
  <command>
 http://first.host.it.is/we/want/that/page/317 -x %HOSTIP:%HTTPPORT -H "Cookie: 
test=yes" --location
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test331 b/tests/data/test331
index 54b86d2a1..2ffac81f7 100644
--- a/tests/data/test331
+++ b/tests/data/test331
@@ -41,6 +41,9 @@ HTTP with cookie using host name 'moo'
  <command>
 -x http://%HOSTIP:%HTTPPORT http://moo/we/want/331 -b none 
http://moo/we/want/3310002
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test335 b/tests/data/test335
index 4d54da980..5817365e3 100644
--- a/tests/data/test335
+++ b/tests/data/test335
@@ -61,6 +61,7 @@ http
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP with proxy Digest and site Digest with creds in URLs
diff --git a/tests/data/test105 b/tests/data/test336
similarity index 68%
copy from tests/data/test105
copy to tests/data/test336
index cc811aeb8..85477c96c 100644
--- a/tests/data/test105
+++ b/tests/data/test336
@@ -17,6 +17,9 @@ that FTP
 works
   so does it?
 </data>
+<datacheck nonewline="yes">
+data
+</datacheck>
 <servercmd>
 REPLY EPSV 500 no such command
 REPLY SIZE 500 no such command
@@ -29,24 +32,26 @@ REPLY SIZE 500 no such command
 ftp
 </server>
  <name>
-FTP user+password in URL and ASCII transfer
+FTP range download when SIZE doesn't work
  </name>
  <command>
-ftp://userdude:passfellow@%HOSTIP:%FTPPORT/105 --use-ascii
+ftp://%HOSTIP:%FTPPORT/336 --use-ascii --range 3-6
 </command>
 </client>
 
 # Verify data after the test has been "shot"
 <verify>
 <protocol>
-USER userdude
-PASS passfellow
+USER anonymous
+PASS address@hidden
 PWD
 EPSV
 PASV
 TYPE A
-SIZE 105
-RETR 105
+SIZE 336
+REST 3
+RETR 336
+ABOR
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test1137 b/tests/data/test337
similarity index 66%
copy from tests/data/test1137
copy to tests/data/test337
index a2bfcbac1..80086dda7 100644
--- a/tests/data/test1137
+++ b/tests/data/test337
@@ -3,8 +3,8 @@
 <keywords>
 FTP
 PASV
+TYPE A
 RETR
---ignore-content-length
 </keywords>
 </info>
 # Server-side
@@ -17,8 +17,12 @@ that FTP
 works
   so does it?
 </data>
+<datacheck nonewline="yes">
+data
+</datacheck>
 <servercmd>
 REPLY EPSV 500 no such command
+REPLY SIZE 213 file: 213, Size =51
 </servercmd>
 </reply>
 
@@ -28,12 +32,11 @@ REPLY EPSV 500 no such command
 ftp
 </server>
  <name>
-FTP RETR --ignore-content-length
+FTP range download with SIZE returning extra crap
  </name>
  <command>
-ftp://%HOSTIP:%FTPPORT/1137 --ignore-content-length
+ftp://%HOSTIP:%FTPPORT/337 --use-ascii --range 3-6
 </command>
-
 </client>
 
 # Verify data after the test has been "shot"
@@ -44,8 +47,11 @@ PASS address@hidden
 PWD
 EPSV
 PASV
-TYPE I
-RETR 1137
+TYPE A
+SIZE 337
+REST 3
+RETR 337
+ABOR
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test199 b/tests/data/test338
similarity index 69%
copy from tests/data/test199
copy to tests/data/test338
index 72675b535..f8dab6528 100644
--- a/tests/data/test199
+++ b/tests/data/test338
@@ -1,11 +1,12 @@
+# See https://github.com/curl/curl/issues/4499
 <testcase>
 <info>
 <keywords>
 HTTP
 HTTP GET
-globbing
 </keywords>
 </info>
+
 #
 # Server-side
 <reply>
@@ -17,12 +18,14 @@ Last-Modified: Tue, 13 Jun 2000 12:10:00 GMT
 ETag: "21025-dc7-39462498"
 Accept-Ranges: bytes
 Content-Length: 6
-Connection: close
 Content-Type: text/html
 Funny-head: yesyes
 
 -foo-
 </data>
+<servercmd>
+connection-monitor
+</servercmd>
 </reply>
 
 #
@@ -32,10 +35,10 @@ Funny-head: yesyes
 http
 </server>
  <name>
-HTTP with -d, -G and {}
+ANYAUTH connection reuse of non-authed connection
  </name>
  <command>
--d "foo=moo&moo=poo" "http://%HOSTIP:%HTTPPORT/{199,199}"; -G
+http://%HOSTIP:%HTTPPORT/338 --next http://%HOSTIP:%HTTPPORT/338 --anyauth -u 
foo:moo
 </command>
 </client>
 
@@ -46,14 +49,15 @@ HTTP with -d, -G and {}
 ^User-Agent:.*
 </strip>
 <protocol>
-GET /199?foo=moo&moo=poo HTTP/1.1
+GET /338 HTTP/1.1
 Host: %HOSTIP:%HTTPPORT
 Accept: */*
 
-GET /199?foo=moo&moo=poo HTTP/1.1
+GET /338 HTTP/1.1
 Host: %HOSTIP:%HTTPPORT
 Accept: */*
 
+[DISCONNECT]
 </protocol>
 </verify>
 </testcase>
diff --git a/tests/data/test356 b/tests/data/test356
index 1be05fe6f..c1234b450 100644
--- a/tests/data/test356
+++ b/tests/data/test356
@@ -61,7 +61,7 @@ Accept: */*
 # matches
 s/\"([^\"]*)\"/TIMESTAMP/
 </stripfile>
-<file name="log/altsvc-356">
+<file name="log/altsvc-356" mode="text">
 # Your alt-svc cache. https://curl.haxx.se/docs/alt-svc.html
 # This file was generated by libcurl! Edit at your own risk.
 h1 %HOSTIP %HTTPPORT h1 nowhere.foo 81 TIMESTAMP 0 0
diff --git a/tests/data/test43 b/tests/data/test43
index e5535bb3a..196017013 100644
--- a/tests/data/test43
+++ b/tests/data/test43
@@ -56,6 +56,9 @@ HTTP Location: following over HTTP proxy
  <command>
 http://%HOSTIP:%HTTPPORT/want/43 -L -x %HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test5 b/tests/data/test5
index b62f1a127..b98d27b3b 100644
--- a/tests/data/test5
+++ b/tests/data/test5
@@ -31,6 +31,9 @@ HTTP over proxy
  <command>
 http://%HOSTIP:%HTTPPORT/we/want/that/page/5#5 -x %HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test503 b/tests/data/test503
index e8dc21e8d..472149d2a 100644
--- a/tests/data/test503
+++ b/tests/data/test503
@@ -50,7 +50,9 @@ http-proxy
 <tool>
 lib503
 </tool>
-
+<features>
+proxy
+</features>
  <name>
 simple multi http:// through proxytunnel with authentication info
  </name>
diff --git a/tests/data/test504 b/tests/data/test504
index 2d3a3dd0d..7c92209cd 100644
--- a/tests/data/test504
+++ b/tests/data/test504
@@ -6,6 +6,7 @@ HTTP GET
 HTTP proxy
 multi
 FAILURE
+connect to non-listen
 </keywords>
 </info>
 
@@ -20,6 +21,7 @@ none
 </server>
 <features>
 http
+proxy
 </features>
 # tool is what to use instead of 'curl'
 <tool>
diff --git a/tests/data/test506 b/tests/data/test506
index 8f06e0e4f..f821ad10e 100644
--- a/tests/data/test506
+++ b/tests/data/test506
@@ -115,94 +115,98 @@ CURLOPT_SHARE
 lock:   share  [Pigs in space]: 16
 unlock: share  [Pigs in space]: 17
 PERFORM
-lock:   dns    [Pigs in space]: 18
-unlock: dns    [Pigs in space]: 19
+lock:   cookie [Pigs in space]: 18
+unlock: cookie [Pigs in space]: 19
 lock:   dns    [Pigs in space]: 20
 unlock: dns    [Pigs in space]: 21
-lock:   cookie [Pigs in space]: 22
-unlock: cookie [Pigs in space]: 23
+lock:   dns    [Pigs in space]: 22
+unlock: dns    [Pigs in space]: 23
 lock:   cookie [Pigs in space]: 24
 unlock: cookie [Pigs in space]: 25
 lock:   cookie [Pigs in space]: 26
 unlock: cookie [Pigs in space]: 27
 lock:   cookie [Pigs in space]: 28
 unlock: cookie [Pigs in space]: 29
+lock:   cookie [Pigs in space]: 30
+unlock: cookie [Pigs in space]: 31
 run 1: set cookie 1, 2 and 3
-lock:   dns    [Pigs in space]: 30
-unlock: dns    [Pigs in space]: 31
 lock:   dns    [Pigs in space]: 32
 unlock: dns    [Pigs in space]: 33
+lock:   dns    [Pigs in space]: 34
+unlock: dns    [Pigs in space]: 35
 CLEANUP
-lock:   cookie [Pigs in space]: 34
-unlock: cookie [Pigs in space]: 35
-lock:   share  [Pigs in space]: 36
-unlock: share  [Pigs in space]: 37
-*** run 2
-CURLOPT_SHARE
+lock:   cookie [Pigs in space]: 36
+unlock: cookie [Pigs in space]: 37
 lock:   share  [Pigs in space]: 38
 unlock: share  [Pigs in space]: 39
+*** run 2
+CURLOPT_SHARE
+lock:   share  [Pigs in space]: 40
+unlock: share  [Pigs in space]: 41
 PERFORM
-lock:   dns    [Pigs in space]: 40
-unlock: dns    [Pigs in space]: 41
 lock:   cookie [Pigs in space]: 42
 unlock: cookie [Pigs in space]: 43
-lock:   cookie [Pigs in space]: 44
-unlock: cookie [Pigs in space]: 45
+lock:   dns    [Pigs in space]: 44
+unlock: dns    [Pigs in space]: 45
 lock:   cookie [Pigs in space]: 46
 unlock: cookie [Pigs in space]: 47
+lock:   cookie [Pigs in space]: 48
+unlock: cookie [Pigs in space]: 49
+lock:   cookie [Pigs in space]: 50
+unlock: cookie [Pigs in space]: 51
 run 2: set cookie 4 and 5
-lock:   dns    [Pigs in space]: 48
-unlock: dns    [Pigs in space]: 49
-lock:   dns    [Pigs in space]: 50
-unlock: dns    [Pigs in space]: 51
+lock:   dns    [Pigs in space]: 52
+unlock: dns    [Pigs in space]: 53
+lock:   dns    [Pigs in space]: 54
+unlock: dns    [Pigs in space]: 55
 CLEANUP
-lock:   cookie [Pigs in space]: 52
-unlock: cookie [Pigs in space]: 53
-lock:   share  [Pigs in space]: 54
-unlock: share  [Pigs in space]: 55
+lock:   cookie [Pigs in space]: 56
+unlock: cookie [Pigs in space]: 57
+lock:   share  [Pigs in space]: 58
+unlock: share  [Pigs in space]: 59
 *** run 3
 CURLOPT_SHARE
-lock:   share  [Pigs in space]: 56
-unlock: share  [Pigs in space]: 57
+lock:   share  [Pigs in space]: 60
+unlock: share  [Pigs in space]: 61
 CURLOPT_COOKIEJAR
 CURLOPT_COOKIELIST FLUSH
-lock:   cookie [Pigs in space]: 58
-unlock: cookie [Pigs in space]: 59
-PERFORM
-lock:   dns    [Pigs in space]: 60
-unlock: dns    [Pigs in space]: 61
 lock:   cookie [Pigs in space]: 62
 unlock: cookie [Pigs in space]: 63
-lock:   cookie [Pigs in space]: 64
-unlock: cookie [Pigs in space]: 65
+PERFORM
+lock:   dns    [Pigs in space]: 64
+unlock: dns    [Pigs in space]: 65
 lock:   cookie [Pigs in space]: 66
 unlock: cookie [Pigs in space]: 67
 lock:   cookie [Pigs in space]: 68
 unlock: cookie [Pigs in space]: 69
 lock:   cookie [Pigs in space]: 70
 unlock: cookie [Pigs in space]: 71
+lock:   cookie [Pigs in space]: 72
+unlock: cookie [Pigs in space]: 73
+lock:   cookie [Pigs in space]: 74
+unlock: cookie [Pigs in space]: 75
 run 3: overwrite cookie 1 and 4, set cookie 6 with and without tailmatch
-lock:   dns    [Pigs in space]: 72
-unlock: dns    [Pigs in space]: 73
-lock:   dns    [Pigs in space]: 74
-unlock: dns    [Pigs in space]: 75
+lock:   dns    [Pigs in space]: 76
+unlock: dns    [Pigs in space]: 77
+lock:   dns    [Pigs in space]: 78
+unlock: dns    [Pigs in space]: 79
 CLEANUP
-lock:   cookie [Pigs in space]: 76
-unlock: cookie [Pigs in space]: 77
-lock:   share  [Pigs in space]: 78
-unlock: share  [Pigs in space]: 79
+lock:   cookie [Pigs in space]: 80
+unlock: cookie [Pigs in space]: 81
+lock:   share  [Pigs in space]: 82
+unlock: share  [Pigs in space]: 83
 CURLOPT_SHARE
-lock:   share  [Pigs in space]: 80
-unlock: share  [Pigs in space]: 81
+lock:   share  [Pigs in space]: 84
+unlock: share  [Pigs in space]: 85
 CURLOPT_COOKIELIST ALL
-lock:   cookie [Pigs in space]: 82
-unlock: cookie [Pigs in space]: 83
-CURLOPT_COOKIEJAR
-CURLOPT_COOKIELIST RELOAD
-lock:   cookie [Pigs in space]: 84
-unlock: cookie [Pigs in space]: 85
 lock:   cookie [Pigs in space]: 86
 unlock: cookie [Pigs in space]: 87
+CURLOPT_COOKIEJAR
+CURLOPT_COOKIELIST RELOAD
+lock:   cookie [Pigs in space]: 88
+unlock: cookie [Pigs in space]: 89
+lock:   cookie [Pigs in space]: 90
+unlock: cookie [Pigs in space]: 91
 loaded cookies:
 -----------------
   www.host.foo.com     FALSE   /       FALSE   1993463787      test6   six_more
@@ -215,17 +219,17 @@ loaded cookies:
   .host.foo.com        TRUE    /       FALSE   1896263787      injected        
yes
 -----------------
 try SHARE_CLEANUP...
-lock:   share  [Pigs in space]: 88
-unlock: share  [Pigs in space]: 89
-SHARE_CLEANUP failed, correct
-CLEANUP
-lock:   cookie [Pigs in space]: 90
-unlock: cookie [Pigs in space]: 91
 lock:   share  [Pigs in space]: 92
 unlock: share  [Pigs in space]: 93
+SHARE_CLEANUP failed, correct
+CLEANUP
+lock:   cookie [Pigs in space]: 94
+unlock: cookie [Pigs in space]: 95
+lock:   share  [Pigs in space]: 96
+unlock: share  [Pigs in space]: 97
 SHARE_CLEANUP
-lock:   share  [Pigs in space]: 94
-unlock: share  [Pigs in space]: 95
+lock:   share  [Pigs in space]: 98
+unlock: share  [Pigs in space]: 99
 GLOBAL_CLEANUP
 </stdout>
 <file name="log/jar506" mode="text">
diff --git a/tests/data/test523 b/tests/data/test523
index 665211d48..c00a0969d 100644
--- a/tests/data/test523
+++ b/tests/data/test523
@@ -41,6 +41,9 @@ HTTP GET with proxy and CURLOPT_PORT
  <command>
 http://www.example.com:999/523 http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test539 b/tests/data/test539
index e9aadd1f3..a69834012 100644
--- a/tests/data/test539
+++ b/tests/data/test539
@@ -64,7 +64,7 @@ SYST
 CWD /
 EPSV
 TYPE A
-LIST path/to/the/file/539./
+LIST path/to/the/file/539.
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test540 b/tests/data/test540
index 8391cbe78..871c558fb 100644
--- a/tests/data/test540
+++ b/tests/data/test540
@@ -67,6 +67,7 @@ lib540
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP proxy auth Digest multi API re-using connection
diff --git a/tests/data/test547 b/tests/data/test547
index 5c4cfaaff..781799b11 100644
--- a/tests/data/test547
+++ b/tests/data/test547
@@ -78,6 +78,7 @@ lib547
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy auth NTLM with POST data from read callback
diff --git a/tests/data/test548 b/tests/data/test548
index 80b87d10c..fa98cd437 100644
--- a/tests/data/test548
+++ b/tests/data/test548
@@ -78,6 +78,7 @@ lib548
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy auth NTLM with POST data from CURLOPT_POSTFIELDS
diff --git a/tests/data/test549 b/tests/data/test549
index a248edbf6..a9f1ca21c 100644
--- a/tests/data/test549
+++ b/tests/data/test549
@@ -32,6 +32,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
 <tool>
 lib549
diff --git a/tests/data/test550 b/tests/data/test550
index a609aa216..1eff72a17 100644
--- a/tests/data/test550
+++ b/tests/data/test550
@@ -32,6 +32,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
 <tool>
 lib549
diff --git a/tests/data/test551 b/tests/data/test551
index ed6aee264..bb31a36f8 100644
--- a/tests/data/test551
+++ b/tests/data/test551
@@ -63,6 +63,7 @@ lib547
 <features>
 !SSPI
 crypto
+proxy
 </features>
  <name>
 HTTP proxy auth Digest with POST data from read callback
diff --git a/tests/data/test555 b/tests/data/test555
index f8b929839..d4b946614 100644
--- a/tests/data/test555
+++ b/tests/data/test555
@@ -83,6 +83,7 @@ lib555
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy auth NTLM with POST data from read callback multi-if
diff --git a/tests/data/test561 b/tests/data/test561
index a6188eacf..359e54cca 100644
--- a/tests/data/test561
+++ b/tests/data/test561
@@ -33,6 +33,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
 <tool>
 lib549
diff --git a/tests/data/test563 b/tests/data/test563
index c9df79219..eb9372ed0 100644
--- a/tests/data/test563
+++ b/tests/data/test563
@@ -32,7 +32,9 @@ lib562
  <name>
 FTP type=A URL and CURLOPT_PORT set and proxy
  </name>
-
+<features>
+proxy
+</features>
 <setenv>
 ftp_proxy=http://%HOSTIP:%HTTPPORT/
 </setenv>
diff --git a/tests/data/test564 b/tests/data/test564
index 4c9ecd466..3078e2d08 100644
--- a/tests/data/test564
+++ b/tests/data/test564
@@ -38,6 +38,9 @@ FTP RETR a file over a SOCKS proxy using the multi interface
 <command>
 ftp://%HOSTIP:%FTPPORT/path/564 %HOSTIP:%SOCKSPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test590 b/tests/data/test590
index 7f30b83ef..6f6250763 100644
--- a/tests/data/test590
+++ b/tests/data/test590
@@ -76,6 +76,7 @@ lib590
 NTLM
 !SSPI
 debug
+proxy
 </features>
  <name>
 HTTP proxy offers Negotiate+NTLM, use only NTLM
diff --git a/tests/data/test62 b/tests/data/test62
index 2784a0f61..82bc0d783 100644
--- a/tests/data/test62
+++ b/tests/data/test62
@@ -29,7 +29,7 @@ http
 HTTP, send cookies when using custom Host:
  </name>
  <command>
-http://%HOSTIP:%HTTPPORT/we/want/62 http://%HOSTIP:%HTTPPORT/we/want?hoge=fuga 
-b log/jar62.txt -H "Host: www.host.foo.com"
+http://%HOSTIP:%HTTPPORT/we/want/62 
http://%HOSTIP:%HTTPPORT/we/want/62?hoge=fuga -b log/jar62.txt -H "Host: 
www.host.foo.com"
 </command>
 <file name="log/jar62.txt">
 # Netscape HTTP Cookie File
@@ -55,7 +55,7 @@ Host: www.host.foo.com
 Accept: */*
 Cookie: test2=yes; test=yes
 
-GET /we/want?hoge=fuga HTTP/1.1
+GET /we/want/62?hoge=fuga HTTP/1.1
 Host: www.host.foo.com
 Accept: */*
 Cookie: test2=yes; test=yes
diff --git a/tests/data/test63 b/tests/data/test63
index ccc19dd24..e7d7a4615 100644
--- a/tests/data/test63
+++ b/tests/data/test63
@@ -33,6 +33,9 @@ http_proxy=http://fake:user@%HOSTIP:%HTTPPORT/
  <command>
 http://we.want.that.site.com/63
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test659 b/tests/data/test659
index 43e1aaf92..048c0d0f2 100644
--- a/tests/data/test659
+++ b/tests/data/test659
@@ -36,6 +36,9 @@ CURLOPT_CURLU without the path set - over proxy
  <command>
 http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 <verify>
diff --git a/tests/data/test520 b/tests/data/test661
similarity index 53%
copy from tests/data/test520
copy to tests/data/test661
index 755caebeb..067612be5 100644
--- a/tests/data/test520
+++ b/tests/data/test661
@@ -9,11 +9,7 @@ FTP
 # Server-side
 <reply>
 <data>
-contents of file
 </data>
-<servercmd>
-REPLY MDTM 213 20030405060708
-</servercmd>
 </reply>
 
 # Client-side
@@ -23,30 +19,54 @@ ftp
 </server>
 # tool is what to use instead of 'curl'
 <tool>
-lib520
+lib661
 </tool>
 
  <name>
-FTP RETR with FILETIME
+Avoid redundant CWDs
  </name>
  <command>
-ftp://%HOSTIP:%FTPPORT/520
+ftp://%HOSTIP:%FTPPORT/
 </command>
 </client>
 
 #
 # Verify data after the test has been "shot"
 <verify>
-
 <protocol>
 USER anonymous
 PASS address@hidden
 PWD
-MDTM 520
+CWD /folderA
 EPSV
 TYPE I
-SIZE 520
-RETR 520
+RETR 661
+CWD /folderB
+EPSV
+RETR 661
+QUIT
+USER anonymous
+PASS address@hidden
+PWD
+EPSV
+TYPE I
+RETR /folderA/661
+CWD /folderB
+EPSV
+RETR 661
+EPSV
+RETR /folderA/661
+QUIT
+USER anonymous
+PASS address@hidden
+PWD
+SYST
+QUIT
+USER anonymous
+PASS address@hidden
+PWD
+SYST
+SYST
 QUIT
 </protocol>
 </verify>
diff --git a/tests/data/test1011 b/tests/data/test662
similarity index 53%
copy from tests/data/test1011
copy to tests/data/test662
index 566867dd6..53d97c39d 100644
--- a/tests/data/test1011
+++ b/tests/data/test662
@@ -2,7 +2,7 @@
 <info>
 <keywords>
 HTTP
-HTTP POST
+HTTP GET
 followlocation
 </keywords>
 </info>
@@ -10,30 +10,30 @@ followlocation
 # Server-side
 <reply>
 <data>
-HTTP/1.1 301 OK
-Location: moo.html&testcase=/10110002
+HTTP/1.1 302 OK
+Location: http://example.net/tes t case=/6620002
 Date: Thu, 09 Nov 2010 14:49:00 GMT
 Content-Length: 0
 
 </data>
 <data2>
-HTTP/1.1 200 OK swsclose
+HTTP/1.1 200 OK
 Location: this should be ignored
 Date: Thu, 09 Nov 2010 14:49:00 GMT
-Connection: close
+Content-Length: 5
 
 body
 </data2>
 <datacheck>
-HTTP/1.1 301 OK
-Location: moo.html&testcase=/10110002
+HTTP/1.1 302 OK
+Location: http://example.net/tes t case=/6620002
 Date: Thu, 09 Nov 2010 14:49:00 GMT
 Content-Length: 0
 
-HTTP/1.1 200 OK swsclose
+HTTP/1.1 200 OK
 Location: this should be ignored
 Date: Thu, 09 Nov 2010 14:49:00 GMT
-Connection: close
+Content-Length: 5
 
 body
 </datacheck>
@@ -46,11 +46,14 @@ body
 http
 </server>
  <name>
-HTTP POST with 301 redirect
+HTTP redirect with whitespace in absolute Location: URL
  </name>
  <command>
-http://%HOSTIP:%HTTPPORT/blah/1011 -L -d "moo"
+http://example.com/please/gimme/662 -L -x http://%HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
@@ -60,16 +63,15 @@ http://%HOSTIP:%HTTPPORT/blah/1011 -L -d "moo"
 ^User-Agent:.*
 </strip>
 <protocol>
-POST /blah/1011 HTTP/1.1
-Host: %HOSTIP:%HTTPPORT
+GET http://example.com/please/gimme/662 HTTP/1.1
+Host: example.com
 Accept: */*
-Content-Length: 3
-Content-Type: application/x-www-form-urlencoded
+Proxy-Connection: Keep-Alive
 
-mooGET /blah/moo.html&testcase=/10110002 HTTP/1.1
-User-Agent: curl/7.10 (i686-pc-linux-gnu) libcurl/7.10 OpenSSL/0.9.6c ipv6 
zlib/1.1.3
-Host: %HOSTIP:%HTTPPORT
+GET http://example.net/tes%20t%20case=/6620002 HTTP/1.1
+Host: example.net
 Accept: */*
+Proxy-Connection: Keep-Alive
 
 </protocol>
 </verify>
diff --git a/tests/data/test663 b/tests/data/test663
new file mode 100644
index 000000000..6743b3258
--- /dev/null
+++ b/tests/data/test663
@@ -0,0 +1,82 @@
+#
+# This test is crafted to reproduce oss-fuzz bug
+# https://crbug.com/oss-fuzz/17954
+#
+<testcase>
+<info>
+<keywords>
+HTTP
+HTTP GET
+followlocation
+</keywords>
+</info>
+#
+# Server-side
+<reply>
+<data>
+HTTP/1.1 302 OK
+Location: http://example.net/there/it/is/../../tes t case=/6630002? yes no
+Date: Thu, 09 Nov 2010 14:49:00 GMT
+Content-Length: 0
+
+</data>
+<data2>
+HTTP/1.1 200 OK
+Location: this should be ignored
+Date: Thu, 09 Nov 2010 14:49:00 GMT
+Content-Length: 5
+
+body
+</data2>
+<datacheck>
+HTTP/1.1 302 OK
+Location: http://example.net/there/it/is/../../tes t case=/6630002? yes no
+Date: Thu, 09 Nov 2010 14:49:00 GMT
+Content-Length: 0
+
+HTTP/1.1 200 OK
+Location: this should be ignored
+Date: Thu, 09 Nov 2010 14:49:00 GMT
+Content-Length: 5
+
+body
+</datacheck>
+</reply>
+
+#
+# Client-side
+<client>
+<server>
+http
+</server>
+ <name>
+HTTP redirect with dotdots and whitespaces in absolute Location: URL
+ </name>
+ <command>
+http://example.com/please/../gimme/663?foobar#hello -L -x 
http://%HOSTIP:%HTTPPORT
+</command>
+<features>
+proxy
+</features>
+</client>
+
+#
+# Verify data after the test has been "shot"
+<verify>
+<strip>
+^User-Agent:.*
+</strip>
+<protocol>
+GET http://example.com/gimme/663?foobar HTTP/1.1
+Host: example.com
+Accept: */*
+Proxy-Connection: Keep-Alive
+
+GET http://example.net/there/tes%20t%20case=/6630002?+yes+no HTTP/1.1
+Host: example.net
+Accept: */*
+Proxy-Connection: Keep-Alive
+
+</protocol>
+</verify>
+</testcase>
diff --git a/tests/data/test702 b/tests/data/test702
index 9fc954a02..c03723676 100644
--- a/tests/data/test702
+++ b/tests/data/test702
@@ -25,6 +25,7 @@ socks4
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 Attempt connect to non-listening HTTP server via SOCKS4 proxy
diff --git a/tests/data/test703 b/tests/data/test703
index 3c0fb314d..53d6a0222 100644
--- a/tests/data/test703
+++ b/tests/data/test703
@@ -25,6 +25,7 @@ socks5
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 Attempt connect to non-listening HTTP server via SOCKS5 proxy
diff --git a/tests/data/test704 b/tests/data/test704
index 15a1b6701..7f891fa95 100644
--- a/tests/data/test704
+++ b/tests/data/test704
@@ -23,8 +23,11 @@ http
 Attempt connect to non-listening SOCKS4 proxy
  </name>
  <command>
---socks4 %HOSTIP:60000 http://%HOSTIP:%HTTPPORT/704
+--socks4 %HOSTIP:2 http://%HOSTIP:%HTTPPORT/704
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test705 b/tests/data/test705
index 3b904c6b1..cfbf3419f 100644
--- a/tests/data/test705
+++ b/tests/data/test705
@@ -23,8 +23,11 @@ http
 Attempt connect to non-listening SOCKS5 proxy
  </name>
  <command>
---socks5 %HOSTIP:60000 http://%HOSTIP:%HTTPPORT/705
+--socks5 %HOSTIP:2 http://%HOSTIP:%HTTPPORT/705
 </command>
+<features>
+proxy
+</features>
 </client>
 
 # Verify data after the test has been "shot"
diff --git a/tests/data/test714 b/tests/data/test714
index efec03227..776d8b292 100644
--- a/tests/data/test714
+++ b/tests/data/test714
@@ -41,6 +41,7 @@ http-proxy
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 FTP fetch with --proxy set to http:// and with --connect-to
diff --git a/tests/data/test715 b/tests/data/test715
index 56936b946..85372ca24 100644
--- a/tests/data/test715
+++ b/tests/data/test715
@@ -43,6 +43,7 @@ socks5
 </server>
 <features>
 http
+proxy
 </features>
  <name>
 FTP fetch with --preproxy, --proxy and --connect-to
diff --git a/tests/data/test716 b/tests/data/test716
index db61dcb39..96167de5c 100644
--- a/tests/data/test716
+++ b/tests/data/test716
@@ -23,6 +23,7 @@ socks5
 </server>
 <features>
 http
+proxy
 </features>
 <name>
 SOCKS5 proxy with too long user name
diff --git a/tests/data/test717 b/tests/data/test717
index 35392443e..dae50d9f2 100644
--- a/tests/data/test717
+++ b/tests/data/test717
@@ -47,6 +47,9 @@ SOCKS5 proxy auth
  <command>
 http://%HOSTIP:1/717 -x socks5://uz3r:p4ssworm@%HOSTIP:%SOCKSPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test79 b/tests/data/test79
index b2566e229..9bc836681 100644
--- a/tests/data/test79
+++ b/tests/data/test79
@@ -29,6 +29,7 @@ http
 </server>
 <features>
 ftp
+proxy
 </features>
  <name>
 FTP over HTTP proxy
diff --git a/tests/data/test80 b/tests/data/test80
index 147a6aa12..3e61eddde 100644
--- a/tests/data/test80
+++ b/tests/data/test80
@@ -55,6 +55,9 @@ HTTP 1.0 CONNECT with proxytunnel and proxy+host Basic 
authentication
  <command>
 http://test.80:%HTTPPORT/we/want/that/page/80 -p --proxy1.0 %HOSTIP:%PROXYPORT 
--user iam:myself --proxy-user youare:yourself
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test81 b/tests/data/test81
index 499831bb3..4cc03975e 100644
--- a/tests/data/test81
+++ b/tests/data/test81
@@ -57,6 +57,7 @@ Finally, this is the real page!
 NTLM
 !SSPI
 debug
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test82 b/tests/data/test82
index 8b58f75da..88d5da84d 100644
--- a/tests/data/test82
+++ b/tests/data/test82
@@ -26,6 +26,7 @@ This is not the real page either!
 # Client-side
 <client>
 <features>
+proxy
 </features>
 <server>
 http
diff --git a/tests/data/test83 b/tests/data/test83
index 120bcc6a1..400e0a0f3 100644
--- a/tests/data/test83
+++ b/tests/data/test83
@@ -52,6 +52,9 @@ HTTP over proxy-tunnel with site authentication
  <command>
 http://test.83:%HTTPPORT/we/want/that/page/83 -p -x %HOSTIP:%PROXYPORT --user 
'iam:my:;self'
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test84 b/tests/data/test84
index 629dae2fc..4cfde6dbb 100644
--- a/tests/data/test84
+++ b/tests/data/test84
@@ -33,6 +33,9 @@ HTTP over proxy with site authentication
  <command>
 http://%HOSTIP:%HTTPPORT/we/want/that/page/84 -x %HOSTIP:%HTTPPORT --user 
iam:myself
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test85 b/tests/data/test85
index cb5e6e052..8b4cd6abc 100644
--- a/tests/data/test85
+++ b/tests/data/test85
@@ -36,6 +36,9 @@ HTTP over proxy with site and proxy authentication
  <command>
 http://%HOSTIP:%HTTPPORT/we/want/that/page/85 -x %HOSTIP:%HTTPPORT --user 
iam:myself --proxy-user testing:this
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test93 b/tests/data/test93
index 138724835..58e47bc6b 100644
--- a/tests/data/test93
+++ b/tests/data/test93
@@ -31,6 +31,9 @@ HTTP GET with failed proxy auth
  <command>
 http://%HOSTIP:%HTTPPORT/93 -x %HOSTIP:%HTTPPORT
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/data/test94 b/tests/data/test94
index 2f3f4824d..4ca53c63b 100644
--- a/tests/data/test94
+++ b/tests/data/test94
@@ -29,6 +29,7 @@ http
 </server>
 <features>
 SSL
+proxy
 </features>
  <name>
 HTTPS GET with failed proxy auth (CONNECT 1.0)
diff --git a/tests/data/test95 b/tests/data/test95
index 1cd88acab..afc00aede 100644
--- a/tests/data/test95
+++ b/tests/data/test95
@@ -52,6 +52,9 @@ HTTP over proxytunnel using POST
  <command>
 http://test.95:%HTTPPORT/we/want/that/page/95 -p -x %HOSTIP:%PROXYPORT -d 
"datatopost=ohthatsfunyesyes"
 </command>
+<features>
+proxy
+</features>
 </client>
 
 #
diff --git a/tests/libtest/Makefile.inc b/tests/libtest/Makefile.inc
index 4ea9cf2a7..9ba72d7de 100644
--- a/tests/libtest/Makefile.inc
+++ b/tests/libtest/Makefile.inc
@@ -22,7 +22,7 @@ noinst_PROGRAMS = chkhostname libauthretry libntlmconnect     
           \
  lib571 lib572 lib573 lib574 lib575 lib576        lib578 lib579 lib582   \
  lib583 lib585 lib586 lib587 lib589 lib590 lib591 lib597 lib598 lib599   \
  lib643 lib644 lib645 lib650 lib651 lib652 lib653 lib654 lib655 lib658   \
- lib659 \
+ lib659 lib661 \
  lib1156 \
  lib1500 lib1501 lib1502 lib1503 lib1504 lib1505 lib1506 lib1507 lib1508 \
  lib1509 lib1510 lib1511 lib1512 lib1513 lib1514 lib1515         lib1517 \
@@ -33,7 +33,7 @@ noinst_PROGRAMS = chkhostname libauthretry libntlmconnect     
           \
  lib1550 lib1551 lib1552 lib1553 lib1554 lib1555 lib1556 lib1557 \
  lib1558 lib1559 lib1560 \
  lib1591 lib1592 lib1593 lib1594 lib1596 \
- lib1900 lib1905 lib1906 \
+ lib1900 lib1905 lib1906 lib1907 \
  lib2033
 
 chkdecimalpoint_SOURCES = chkdecimalpoint.c ../../lib/mprintf.c \
@@ -345,6 +345,9 @@ lib659_SOURCES = lib659.c $(SUPPORTFILES) $(TESTUTIL) 
$(WARNLESS)
 lib659_LDADD = $(TESTUTIL_LIBS)
 lib659_CPPFLAGS = $(AM_CPPFLAGS)
 
+lib661_SOURCES = lib661.c $(SUPPORTFILES)
+lib661_CPPFLAGS = $(AM_CPPFLAGS)
+
 lib1500_SOURCES = lib1500.c $(SUPPORTFILES) $(TESTUTIL)
 lib1500_LDADD = $(TESTUTIL_LIBS)
 lib1500_CPPFLAGS = $(AM_CPPFLAGS)
@@ -563,6 +566,10 @@ lib1906_SOURCES = lib1906.c $(SUPPORTFILES) $(TESTUTIL) 
$(WARNLESS)
 lib1906_LDADD = $(TESTUTIL_LIBS)
 lib1906_CPPFLAGS = $(AM_CPPFLAGS)
 
+lib1907_SOURCES = lib1907.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
+lib1907_LDADD = $(TESTUTIL_LIBS)
+lib1907_CPPFLAGS = $(AM_CPPFLAGS)
+
 lib2033_SOURCES = libntlmconnect.c $(SUPPORTFILES) $(TESTUTIL) $(WARNLESS)
 lib2033_LDADD = $(TESTUTIL_LIBS)
 lib2033_CPPFLAGS = $(AM_CPPFLAGS) -DUSE_PIPELINING
diff --git a/tests/libtest/lib1156.c b/tests/libtest/lib1156.c
index f4385b26a..df6062c56 100644
--- a/tests/libtest/lib1156.c
+++ b/tests/libtest/lib1156.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib1522.c b/tests/libtest/lib1522.c
index 3675175ee..6df152f1f 100644
--- a/tests/libtest/lib1522.c
+++ b/tests/libtest/lib1522.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib1560.c b/tests/libtest/lib1560.c
index 85884474e..7f8accc7d 100644
--- a/tests/libtest/lib1560.c
+++ b/tests/libtest/lib1560.c
@@ -140,6 +140,26 @@ static struct testcase get_parts_list[] ={
    "file | [11] | [12] | [13] | [14] | [15] | C:\\programs\\foo | [16] | [17]",
    CURLU_DEFAULT_SCHEME, 0, CURLUE_OK},
 #endif
+  {"https://example.com/color/#green?no-black";,
+   "https | [11] | [12] | [13] | example.com | [15] | /color/ | [16] | "
+   "green?no-black",
+   CURLU_DEFAULT_SCHEME, 0, CURLUE_OK },
+  {"https://example.com/color/#green#no-black";,
+   "https | [11] | [12] | [13] | example.com | [15] | /color/ | [16] | "
+   "green#no-black",
+   CURLU_DEFAULT_SCHEME, 0, CURLUE_OK },
+  {"https://example.com/color/?green#no-black";,
+   "https | [11] | [12] | [13] | example.com | [15] | /color/ | green | "
+   "no-black",
+   CURLU_DEFAULT_SCHEME, 0, CURLUE_OK },
+  {"https://example.com/#color/?green#no-black";,
+   "https | [11] | [12] | [13] | example.com | [15] | / | [16] | "
+   "color/?green#no-black",
+   CURLU_DEFAULT_SCHEME, 0, CURLUE_OK },
+  {"https://example.#com/color/?green#no-black";,
+   "https | [11] | [12] | [13] | example. | [15] | / | [16] | "
+   "com/color/?green#no-black",
+   CURLU_DEFAULT_SCHEME, 0, CURLUE_OK },
   {"http://[ab.be:1]/x";, "",
    CURLU_DEFAULT_SCHEME, 0, CURLUE_MALFORMED_INPUT},
   {"http://[ab.be]/x";, "",
@@ -414,6 +434,18 @@ static struct urltestcase get_url_list[] = {
   {"tp://example.com/path/html",
    "tp://example.com/path/html",
    CURLU_NON_SUPPORT_SCHEME, 0, CURLUE_OK},
+  {"custom-scheme://host?expected=test-good",
+   "custom-scheme://host/?expected=test-good",
+   CURLU_NON_SUPPORT_SCHEME, 0, CURLUE_OK},
+  {"custom-scheme://?expected=test-bad",
+   "",
+   CURLU_NON_SUPPORT_SCHEME, 0, CURLUE_MALFORMED_INPUT},
+  {"custom-scheme://?expected=test-new-good",
+   "custom-scheme:///?expected=test-new-good",
+   CURLU_NON_SUPPORT_SCHEME | CURLU_NO_AUTHORITY, 0, CURLUE_OK},
+  {"custom-scheme://host?expected=test-still-good",
+   "custom-scheme://host/?expected=test-still-good",
+   CURLU_NON_SUPPORT_SCHEME | CURLU_NO_AUTHORITY, 0, CURLUE_OK},
   {NULL, NULL, 0, 0, 0}
 };
 
@@ -551,6 +583,17 @@ static struct setcase set_parts_list[] = {
    "scheme=ftp,",
    "ftp://example.com:80/";,
    0, 0, CURLUE_OK, CURLUE_OK},
+  {"custom-scheme://host",
+   "host=\"\",",
+   "custom-scheme://host/",
+   CURLU_NON_SUPPORT_SCHEME, CURLU_NON_SUPPORT_SCHEME, CURLUE_OK,
+   CURLUE_MALFORMED_INPUT},
+  {"custom-scheme://host",
+   "host=\"\",",
+   "custom-scheme:///",
+   CURLU_NON_SUPPORT_SCHEME, CURLU_NON_SUPPORT_SCHEME | CURLU_NO_AUTHORITY,
+   CURLUE_OK, CURLUE_OK},
+
   {NULL, NULL, NULL, 0, 0, 0, 0}
 };
 
diff --git a/tests/libtest/lib1906.c b/tests/libtest/lib1907.c
similarity index 64%
copy from tests/libtest/lib1906.c
copy to tests/libtest/lib1907.c
index 6c7a4bf6e..2d9465aee 100644
--- a/tests/libtest/lib1906.c
+++ b/tests/libtest/lib1907.c
@@ -28,17 +28,15 @@
 int test(char *URL)
 {
   char *url_after;
-  CURLU *curlu = curl_url();
-  CURL *curl = curl_easy_init();
+  CURL *curl;
   CURLcode curl_code;
   char error_buffer[CURL_ERROR_SIZE] = "";
 
-  curl_url_set(curlu, CURLUPART_URL, URL, CURLU_DEFAULT_SCHEME);
-  curl_easy_setopt(curl, CURLOPT_CURLU, curlu);
+  curl_global_init(CURL_GLOBAL_DEFAULT);
+  curl = curl_easy_init();
+  curl_easy_setopt(curl, CURLOPT_URL, URL);
   curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, error_buffer);
   curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
-  /* set a port number that makes this reqeuest fail */
-  curl_easy_setopt(curl, CURLOPT_PORT, 1L);
   curl_code = curl_easy_perform(curl);
   if(!curl_code)
     fprintf(stderr, "failure expected, "
@@ -46,26 +44,10 @@ int test(char *URL)
             (long) curl_code, curl_easy_strerror(curl_code), error_buffer);
 
   /* print the used url */
-  curl_url_get(curlu, CURLUPART_URL, &url_after, 0);
-  fprintf(stderr, "curlu now: <%s>\n", url_after);
-  curl_free(url_after);
-
-  /* now reset CURLOP_PORT to go back to originally set port number */
-  curl_easy_setopt(curl, CURLOPT_PORT, 0L);
-
-  curl_code = curl_easy_perform(curl);
-  if(curl_code)
-    fprintf(stderr, "success expected, "
-            "curl_easy_perform returned %ld: <%s>, <%s>\n",
-            (long) curl_code, curl_easy_strerror(curl_code), error_buffer);
-
-  /* print url */
-  curl_url_get(curlu, CURLUPART_URL, &url_after, 0);
-  fprintf(stderr, "curlu now: <%s>\n", url_after);
-  curl_free(url_after);
+  if(!curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url_after))
+    printf("Effective URL: %s\n", url_after);
 
   curl_easy_cleanup(curl);
-  curl_url_cleanup(curlu);
   curl_global_cleanup();
 
   return 0;
diff --git a/tests/libtest/lib506.c b/tests/libtest/lib506.c
index 9f656e032..e0325ee00 100644
--- a/tests/libtest/lib506.c
+++ b/tests/libtest/lib506.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -142,6 +142,7 @@ static void *fire(void *ptr)
   curl_easy_setopt(curl, CURLOPT_VERBOSE,    1L);
   curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
   curl_easy_setopt(curl, CURLOPT_URL,        tdata->url);
+  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
   printf("CURLOPT_SHARE\n");
   curl_easy_setopt(curl, CURLOPT_SHARE, tdata->share);
 
diff --git a/tests/libtest/lib509.c b/tests/libtest/lib509.c
index 755208b8d..e8e803ffc 100644
--- a/tests/libtest/lib509.c
+++ b/tests/libtest/lib509.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib541.c b/tests/libtest/lib541.c
index 2861bfcc1..bcbaa481c 100644
--- a/tests/libtest/lib541.c
+++ b/tests/libtest/lib541.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2016, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib557.c b/tests/libtest/lib557.c
index 485ac8b9a..2e51b99c1 100644
--- a/tests/libtest/lib557.c
+++ b/tests/libtest/lib557.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib569.c b/tests/libtest/lib569.c
index 3ddc10c4f..80116dad3 100644
--- a/tests/libtest/lib569.c
+++ b/tests/libtest/lib569.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib571.c b/tests/libtest/lib571.c
index f015f6bb2..002617878 100644
--- a/tests/libtest/lib571.c
+++ b/tests/libtest/lib571.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/libtest/lib661.c b/tests/libtest/lib661.c
new file mode 100644
index 000000000..a4f2c8e5c
--- /dev/null
+++ b/tests/libtest/lib661.c
@@ -0,0 +1,150 @@
+/***************************************************************************
+ *                                  _   _ ____  _
+ *  Project                     ___| | | |  _ \| |
+ *                             / __| | | | |_) | |
+ *                            | (__| |_| |  _ <| |___
+ *                             \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+#include "test.h"
+#include "memdebug.h"
+
+int test(char *URL)
+{
+   CURLcode res;
+   CURL *curl = NULL;
+   char *newURL = NULL;
+   struct curl_slist *slist = NULL;
+
+   if(curl_global_init(CURL_GLOBAL_ALL) != CURLE_OK) {
+     fprintf(stderr, "curl_global_init() failed\n");
+     return TEST_ERR_MAJOR_BAD;
+   }
+
+   curl = curl_easy_init();
+   if(!curl) {
+     fprintf(stderr, "curl_easy_init() failed\n");
+     res = TEST_ERR_MAJOR_BAD;
+     goto test_cleanup;
+   }
+
+   /* test: CURLFTPMETHOD_SINGLECWD with absolute path should
+            skip CWD to entry path */
+   newURL = aprintf("%s/folderA/661", URL);
+   test_setopt(curl, CURLOPT_URL, newURL);
+   test_setopt(curl, CURLOPT_VERBOSE, 1L);
+   test_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_SINGLECWD);
+   res = curl_easy_perform(curl);
+
+   free(newURL);
+   newURL = aprintf("%s/folderB/661", URL);
+   test_setopt(curl, CURLOPT_URL, newURL);
+   res = curl_easy_perform(curl);
+
+   /* test: CURLFTPMETHOD_NOCWD with absolute path should
+      never emit CWD (for both new and reused easy handle) */
+   curl_easy_cleanup(curl);
+   curl = curl_easy_init();
+   if(!curl) {
+     fprintf(stderr, "curl_easy_init() failed\n");
+     res = TEST_ERR_MAJOR_BAD;
+     goto test_cleanup;
+   }
+
+   free(newURL);
+   newURL = aprintf("%s/folderA/661", URL);
+   test_setopt(curl, CURLOPT_URL, newURL);
+   test_setopt(curl, CURLOPT_VERBOSE, 1L);
+   test_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 1L);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_NOCWD);
+   res = curl_easy_perform(curl);
+
+   /* curve ball: CWD /folderB before reusing connection with _NOCWD */
+   free(newURL);
+   newURL = aprintf("%s/folderB/661", URL);
+   test_setopt(curl, CURLOPT_URL, newURL);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_SINGLECWD);
+   res = curl_easy_perform(curl);
+
+   free(newURL);
+   newURL = aprintf("%s/folderA/661", URL);
+   test_setopt(curl, CURLOPT_URL, newURL);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_NOCWD);
+   res = curl_easy_perform(curl);
+
+   /* test: CURLFTPMETHOD_NOCWD with home-relative path should
+      not emit CWD for first FTP access after login */
+   curl_easy_cleanup(curl);
+   curl = curl_easy_init();
+   if(!curl) {
+     fprintf(stderr, "curl_easy_init() failed\n");
+     res = TEST_ERR_MAJOR_BAD;
+     goto test_cleanup;
+   }
+
+   slist = curl_slist_append(NULL, "SYST");
+   if(slist == NULL) {
+     fprintf(stderr, "curl_slist_append() failed\n");
+     res = TEST_ERR_MAJOR_BAD;
+     goto test_cleanup;
+   }
+
+   test_setopt(curl, CURLOPT_URL, URL);
+   test_setopt(curl, CURLOPT_VERBOSE, 1L);
+   test_setopt(curl, CURLOPT_NOBODY, 1L);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_NOCWD);
+   test_setopt(curl, CURLOPT_QUOTE, slist);
+   res = curl_easy_perform(curl);
+
+   /* test: CURLFTPMETHOD_SINGLECWD with home-relative path should
+      not emit CWD for first FTP access after login */
+   curl_easy_cleanup(curl);
+   curl = curl_easy_init();
+   if(!curl) {
+     fprintf(stderr, "curl_easy_init() failed\n");
+     res = TEST_ERR_MAJOR_BAD;
+     goto test_cleanup;
+   }
+
+   test_setopt(curl, CURLOPT_URL, URL);
+   test_setopt(curl, CURLOPT_VERBOSE, 1L);
+   test_setopt(curl, CURLOPT_NOBODY, 1L);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_SINGLECWD);
+   test_setopt(curl, CURLOPT_QUOTE, slist);
+   res = curl_easy_perform(curl);
+
+   /* test: CURLFTPMETHOD_NOCWD with home-relative path should
+      not emit CWD for second FTP access when not needed +
+      bonus: see if path buffering survives curl_easy_reset() */
+   curl_easy_reset(curl);
+   test_setopt(curl, CURLOPT_URL, URL);
+   test_setopt(curl, CURLOPT_VERBOSE, 1L);
+   test_setopt(curl, CURLOPT_NOBODY, 1L);
+   test_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long) CURLFTPMETHOD_NOCWD);
+   test_setopt(curl, CURLOPT_QUOTE, slist);
+   res = curl_easy_perform(curl);
+
+
+test_cleanup:
+
+   curl_slist_free_all(slist);
+   free(newURL);
+   curl_easy_cleanup(curl);
+   curl_global_cleanup();
+
+   return (int)res;
+}
diff --git a/tests/manpage-scan.pl b/tests/manpage-scan.pl
index 62eaebea1..3384eec25 100755
--- a/tests/manpage-scan.pl
+++ b/tests/manpage-scan.pl
@@ -6,7 +6,7 @@
 #                            | (__| |_| |  _ <| |___
 #                             \___|\___/|_| \_\_____|
 #
-# Copyright (C) 2016, 2017, Daniel Stenberg, <address@hidden>, et al.
+# Copyright (C) 2016 - 2019, Daniel Stenberg, <address@hidden>, et al.
 #
 # This software is licensed as described in the file COPYING, which
 # you should have received as part of this distribution. The terms
@@ -138,6 +138,7 @@ my %opts = (
     '-N, --no-buffer' => 1,
     '--no-sessionid' => 1,
     '--no-keepalive' => 1,
+    '--no-progress-meter' => 1,
 
     # pretend these options without -no exist in curl.1 and tool_help.c
     '--alpn' => 6,
@@ -147,6 +148,7 @@ my %opts = (
     '--keepalive' => 6,
     '-N, --buffer' => 6,
     '--sessionid' => 6,
+    '--progress-meter' => 6,
 
     # deprecated options do not need to be in tool_help.c nor curl.1
     '--krb4' => 6,
diff --git a/tests/runtests.pl b/tests/runtests.pl
index 0bb9605ac..5ab234c0d 100755
--- a/tests/runtests.pl
+++ b/tests/runtests.pl
@@ -2678,6 +2678,7 @@ sub checksystem {
                 # This is a Windows MinGW build or native build, we need to use
                 # Win32-style path.
                 $pwd = pathhelp::sys_native_current_path();
+                $has_textaware = 1;
             }
            if ($libcurl =~ /(winssl|schannel)/i) {
                $has_winssl=1;
@@ -3024,7 +3025,6 @@ sub checksystem {
             }
         }
     }
-    $has_textaware = ($^O eq 'MSWin32') || ($^O eq 'msys');
 
     logmsg "***************************************** \n";
 
diff --git a/tests/server/.gitignore b/tests/server/.gitignore
index d410f5ea4..94329f7da 100644
--- a/tests/server/.gitignore
+++ b/tests/server/.gitignore
@@ -6,3 +6,4 @@ sockfilt
 sws
 tftpd
 socksd
+disabled
diff --git a/tests/server/util.c b/tests/server/util.c
index b06133802..cc53d3bf4 100644
--- a/tests/server/util.c
+++ b/tests/server/util.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/smbserver.py.in b/tests/smbserver.py.in
index 8a4fba8a0..2677f0c46 100755
--- a/tests/smbserver.py.in
+++ b/tests/smbserver.py.in
@@ -24,11 +24,14 @@
 from __future__ import (absolute_import, division, print_function)
 # unicode_literals)
 import argparse
-import ConfigParser
 import os
 import sys
 import logging
 import tempfile
+try: # Python 3
+    import configparser
+except ImportError: # Python 2
+    import ConfigParser as configparser
 
 # Import our curl test data helper
 import curl_test_data
@@ -58,7 +61,7 @@ def smbserver(options):
             f.write("{0}".format(pid))
 
     # Here we write a mini config for the server
-    smb_config = ConfigParser.ConfigParser()
+    smb_config = configparser.ConfigParser()
     smb_config.add_section("global")
     smb_config.set("global", "server_name", "SERVICE")
     smb_config.set("global", "server_os", "UNIX")
diff --git a/tests/unit/CMakeLists.txt b/tests/unit/CMakeLists.txt
index 134462733..94f2e2b14 100644
--- a/tests/unit/CMakeLists.txt
+++ b/tests/unit/CMakeLists.txt
@@ -22,6 +22,7 @@ set(UT_SRC
 # Broken link on Linux
 #  unit1604.c
   unit1620.c
+  unit1655.c
   )
 
 set(UT_COMMON_FILES ../libtest/first.c ../libtest/test.h curlcheck.h)
diff --git a/tests/unit/Makefile.inc b/tests/unit/Makefile.inc
index 67de815c0..6ad42bd42 100644
--- a/tests/unit/Makefile.inc
+++ b/tests/unit/Makefile.inc
@@ -11,7 +11,7 @@ UNITPROGS = unit1300 unit1301 unit1302 unit1303 unit1304 
unit1305 unit1307 \
  unit1399 \
  unit1600 unit1601 unit1602 unit1603 unit1604 unit1605 unit1606 unit1607 \
  unit1608 unit1609 unit1620 unit1621 \
- unit1650 unit1651 unit1652 unit1653 unit1654
+ unit1650 unit1651 unit1652 unit1653 unit1654 unit1655
 
 unit1300_SOURCES = unit1300.c $(UNITFILES)
 unit1300_CPPFLAGS = $(AM_CPPFLAGS)
@@ -118,3 +118,7 @@ unit1653_CPPFLAGS = $(AM_CPPFLAGS)
 
 unit1654_SOURCES = unit1654.c $(UNITFILES)
 unit1654_CPPFLAGS = $(AM_CPPFLAGS)
+
+unit1655_SOURCES = unit1655.c $(UNITFILES)
+unit1655_CPPFLAGS = $(AM_CPPFLAGS)
+
diff --git a/tests/unit/README b/tests/unit/README
index b8a513b3b..060b670c6 100644
--- a/tests/unit/README
+++ b/tests/unit/README
@@ -35,6 +35,9 @@ We put tests that focus on an area or a specific function 
into a single C
 source file. The source file should be named 'unitNNNN.c' where NNNN is a
 number that starts with 1300 and you can pick the next free number.
 
+Add your test to tests/unit/Makefile.inc (if it is a unit test).
+Add your test data to tests/data/Makefile.inc
+
 You also need a separate file called tests/data/testNNNN (using the same
 number) that describes your test case. See the test1300 file for inspiration
 and the tests/FILEFORMAT documentation.
@@ -46,9 +49,10 @@ For the actual C file, here's a very simple example:
 
 #include "a libcurl header.h" /* from the lib dir */
 
-static void unit_setup( void )
+static CURLcode unit_setup( void )
 {
   /* whatever you want done first */
+  return CURLE_OK;
 }
 
 static void unit_stop( void )
diff --git a/tests/unit/unit1303.c b/tests/unit/unit1303.c
index b065683a6..945b82ba7 100644
--- a/tests/unit/unit1303.c
+++ b/tests/unit/unit1303.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2017, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -68,7 +68,7 @@ struct timetest {
   int timeout_ms;
   int connecttimeout_ms;
   bool connecting;
-  time_t result;
+  timediff_t result;
   const char *comment;
 };
 
@@ -138,7 +138,7 @@ UNITTEST_START
   data->progress.t_startop.tv_usec = 0;
 
   for(i = 0; i < sizeof(run)/sizeof(run[0]); i++) {
-    time_t timeout;
+    timediff_t timeout;
     NOW(run[i].now_s, run[i].now_us);
     TIMEOUTS(run[i].timeout_ms, run[i].connecttimeout_ms);
     timeout =  Curl_timeleft(data, &now, run[i].connecting);
diff --git a/tests/unit/unit1307.c b/tests/unit/unit1307.c
index 91e4606b7..7e88ea4d9 100644
--- a/tests/unit/unit1307.c
+++ b/tests/unit/unit1307.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
diff --git a/tests/unit/unit1399.c b/tests/unit/unit1399.c
index 7383fbd86..3b52989e4 100644
--- a/tests/unit/unit1399.c
+++ b/tests/unit/unit1399.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -50,7 +50,7 @@ static void fake_t_startsingle_time(struct Curl_easy *data,
   data->progress.t_startsingle.tv_usec = fake_now.tv_usec;
 }
 
-static bool usec_matches_seconds(time_t time_usec, int expected_seconds)
+static bool usec_matches_seconds(timediff_t time_usec, int expected_seconds)
 {
   int time_sec = (int)(time_usec / usec_magnitude);
   bool same = (time_sec == expected_seconds);
diff --git a/tests/unit/unit1620.c b/tests/unit/unit1620.c
index b8b096521..c6aa721cf 100644
--- a/tests/unit/unit1620.c
+++ b/tests/unit/unit1620.c
@@ -5,7 +5,7 @@
  *                            | (__| |_| |  _ <| |___
  *                             \___|\___/|_| \_\_____|
  *
- * Copyright (C) 1998 - 2018, Daniel Stenberg, <address@hidden>, et al.
+ * Copyright (C) 1998 - 2019, Daniel Stenberg, <address@hidden>, et al.
  *
  * This software is licensed as described in the file COPYING, which
  * you should have received as part of this distribution. The terms
@@ -83,7 +83,7 @@ UNITTEST_START
 
   Curl_free_request_state(empty);
 
-  rc = Curl_close(empty);
+  rc = Curl_close(&empty);
   fail_unless(rc == CURLE_OK, "Curl_close() failed");
 
 }
diff --git a/tests/unit/unit1654.c b/tests/unit/unit1654.c
index 51fc5d16f..a800d9c3a 100644
--- a/tests/unit/unit1654.c
+++ b/tests/unit/unit1654.c
@@ -97,6 +97,15 @@ UNITTEST_START
   }
   fail_unless(asi->num == 9, "wrong number of entries");
 
+  /* quoted 'ma' value */
+  result = Curl_altsvc_parse(curl, asi, "h2=\"example.net:443\"; ma=\"180\";",
+                             ALPN_h2, "example.net", 80);
+  if(result) {
+    fprintf(stderr, "Curl_altsvc_parse(4) failed!\n");
+    unitfail++;
+  }
+  fail_unless(asi->num == 10, "wrong number of entries");
+
   result = Curl_altsvc_parse(curl, asi,
                              "h2=\":443\", h3=\":443\"; ma = 120; persist = 1",
                              ALPN_h1, "curl.haxx.se", 80);
@@ -104,7 +113,7 @@ UNITTEST_START
     fprintf(stderr, "Curl_altsvc_parse(5) failed!\n");
     unitfail++;
   }
-  fail_unless(asi->num == 11, "wrong number of entries");
+  fail_unless(asi->num == 12, "wrong number of entries");
 
   /* clear that one again and decrease the counter */
   result = Curl_altsvc_parse(curl, asi, "clear;",
@@ -113,7 +122,7 @@ UNITTEST_START
     fprintf(stderr, "Curl_altsvc_parse(6) failed!\n");
     unitfail++;
   }
-  fail_unless(asi->num == 9, "wrong number of entries");
+  fail_unless(asi->num == 10, "wrong number of entries");
 
   Curl_altsvc_save(asi, outname);
 
diff --git a/tests/unit/unit1655.c b/tests/unit/unit1655.c
new file mode 100644
index 000000000..7fea134d5
--- /dev/null
+++ b/tests/unit/unit1655.c
@@ -0,0 +1,113 @@
+/***************************************************************************
+ *                                  _   _ ____  _
+ *  Project                     ___| | | |  _ \| |
+ *                             / __| | | | |_) | |
+ *                            | (__| |_| |  _ <| |___
+ *                             \___|\___/|_| \_\_____|
+ *
+ * Copyright (C) 2019, Daniel Stenberg, <address@hidden>, et al.
+ *
+ * This software is licensed as described in the file COPYING, which
+ * you should have received as part of this distribution. The terms
+ * are also available at https://curl.haxx.se/docs/copyright.html.
+ *
+ * You may opt to use, copy, modify, merge, publish, distribute and/or sell
+ * copies of the Software, and permit persons to whom the Software is
+ * furnished to do so, under the terms of the COPYING file.
+ *
+ * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
+ * KIND, either express or implied.
+ *
+ ***************************************************************************/
+#include "curlcheck.h"
+
+#include "doh.h" /* from the lib dir */
+
+static CURLcode unit_setup(void)
+{
+  /* whatever you want done first */
+  return CURLE_OK;
+}
+
+static void unit_stop(void)
+{
+    /* done before shutting down and exiting */
+}
+
+UNITTEST_START
+
+/* introduce a scope and prove the corner case with write overflow,
+ * so we can prove this test would detect it and that it is properly fixed
+ */
+do {
+  const char *bad = "this.is.a.hostname.where.each.individual.part.is.within."
+    "the.sixtythree.character.limit.but.still.long.enough.to."
+    "trigger.the.the.buffer.overflow......it.is.chosen.to.be."
+    "of.a.length.such.that.it.causes.a.two.byte.buffer......."
+    "overwrite.....making.it.longer.causes.doh.encode.to....."
+    ".return.early.so.dont.change.its.length.xxxx.xxxxxxxxxxx"
+    "..xxxxxx.....xx..........xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+    "xxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxxxx..x......xxxx"
+    "xxxx..xxxxxxxxxxxxxxxxxxx.x...xxxx.x.x.x...xxxxx";
+
+  /* plays the role of struct dnsprobe in urldata.h */
+  struct demo {
+    unsigned char dohbuffer[512];
+    unsigned char canary1;
+    unsigned char canary2;
+    unsigned char canary3;
+  };
+
+  size_t olen = 100000;
+  struct demo victim;
+  DOHcode d;
+  victim.canary1 = 87; /* magic numbers, arbritrarily picked */
+  victim.canary2 = 35;
+  victim.canary3 = 41;
+  d = doh_encode(bad, DNS_TYPE_A, victim.dohbuffer,
+                 sizeof(victim.dohbuffer), &olen);
+  fail_unless(victim.canary1 == 87, "one byte buffer overwrite has happened");
+  fail_unless(victim.canary2 == 35, "two byte buffer overwrite has happened");
+  fail_unless(victim.canary3 == 41,
+              "three byte buffer overwrite has happened");
+  if(d == DOH_OK) {
+    fail_unless(olen <= sizeof(victim.dohbuffer), "wrote outside bounds");
+    fail_unless(olen > strlen(bad), "unrealistic low size");
+  }
+} while(0);
+
+/* run normal cases and try to trigger buffer length related errors */
+do {
+  DNStype dnstype = DNS_TYPE_A;
+  unsigned char buffer[128];
+  const size_t buflen = sizeof(buffer);
+  const size_t magic1 = 9765;
+  size_t olen1 = magic1;
+  const char *sunshine1 = "a.com";
+  const char *sunshine2 = "aa.com";
+  size_t olen2;
+  DOHcode ret2;
+  size_t olen;
+
+  DOHcode ret = doh_encode(sunshine1, dnstype, buffer, buflen, &olen1);
+  fail_unless(ret == DOH_OK, "sunshine case 1 should pass fine");
+  fail_if(olen1 == magic1, "olen has not been assigned properly");
+  fail_unless(olen1 > strlen(sunshine1), "bad out length");
+
+  /* add one letter, the response should be one longer */
+  olen2 = magic1;
+  ret2 = doh_encode(sunshine2, dnstype, buffer, buflen, &olen2);
+  fail_unless(ret2 == DOH_OK, "sunshine case 2 should pass fine");
+  fail_if(olen2 == magic1, "olen has not been assigned properly");
+  fail_unless(olen1 + 1 == olen2, "olen should grow with the hostname");
+
+  /* pass a short buffer, should fail */
+  ret = doh_encode(sunshine1, dnstype, buffer, olen1 - 1, &olen);
+  fail_if(ret == DOH_OK, "short buffer should have been noticed");
+
+  /* pass a minimum buffer, should succeed */
+  ret = doh_encode(sunshine1, dnstype, buffer, olen1, &olen);
+  fail_unless(ret == DOH_OK, "minimal length buffer should be long enough");
+  fail_unless(olen == olen1, "bad buffer length");
+} while(0);
+UNITTEST_STOP
diff --git a/winbuild/Makefile.vc b/winbuild/Makefile.vc
index 9b3b35513..7ad49f09f 100644
--- a/winbuild/Makefile.vc
+++ b/winbuild/Makefile.vc
@@ -59,6 +59,7 @@ CFGSET=true
 !MESSAGE   ENABLE_WINSSL=<yes or no>      - Enable native Windows SSL support, 
defaults to yes
 !MESSAGE   ENABLE_OPENSSL_AUTO_LOAD_CONFIG=<yes or no>
 !MESSAGE                                  - Whether the OpenSSL configuration 
will be loaded automatically, defaults to yes
+!MESSAGE   ENABLE_UNICODE=<yes or no>     - Enable UNICODE support, defaults 
to no
 !MESSAGE   GEN_PDB=<yes or no>            - Generate Program Database (debug 
symbols for release build)
 !MESSAGE   DEBUG=<yes or no>              - Debug builds
 !MESSAGE   MACHINE=<x86 or x64>           - Target architecture (default x64 
on AMD64, x86 on others)
@@ -146,6 +147,14 @@ ENABLE_OPENSSL_AUTO_LOAD_CONFIG = true
 ENABLE_OPENSSL_AUTO_LOAD_CONFIG = false
 !ENDIF
 
+!IFNDEF ENABLE_UNICODE
+USE_UNICODE = false
+!ELSEIF "$(ENABLE_UNICODE)"=="yes"
+USE_UNICODE = true
+!ELSEIF "$(ENABLE_UNICODE)"=="no"
+USE_UNICODE = false
+!ENDIF
+
 CONFIG_NAME_LIB = libcurl
 
 !IF "$(WITH_SSL)"=="dll"
@@ -277,6 +286,7 @@ $(MODE):
        @SET USE_IPV6=$(USE_IPV6)
        @SET USE_SSPI=$(USE_SSPI)
        @SET USE_WINSSL=$(USE_WINSSL)
+       @SET USE_UNICODE=$(USE_UNICODE)
 # compatibility bit
        @SET WITH_NGHTTP2=$(WITH_NGHTTP2)
 
diff --git a/winbuild/MakefileBuild.vc b/winbuild/MakefileBuild.vc
index b5742e109..8267250c2 100644
--- a/winbuild/MakefileBuild.vc
+++ b/winbuild/MakefileBuild.vc
@@ -393,11 +393,11 @@ CFGSET = true
 !IF "$(DEBUG)"=="yes"
 RC_FLAGS = /dDEBUGBUILD=1 /Fo $@ $(LIBCURL_SRC_DIR)\libcurl.rc
 CURL_CC       = $(CC_DEBUG) $(RTLIB_DEBUG)
-CURL_RC_FLAGS = /i../include /dDEBUGBUILD=1 /Fo $@ $(CURL_SRC_DIR)\curl.rc
+CURL_RC_FLAGS = $(CURL_RC_FLAGS) /i../include /dDEBUGBUILD=1 /Fo $@ 
$(CURL_SRC_DIR)\curl.rc
 !ELSE
 RC_FLAGS = /dDEBUGBUILD=0 /Fo $@ $(LIBCURL_SRC_DIR)\libcurl.rc
 CURL_CC       = $(CC_NODEBUG) $(RTLIB)
-CURL_RC_FLAGS = /i../include /dDEBUGBUILD=0 /Fo $@ $(CURL_SRC_DIR)\curl.rc
+CURL_RC_FLAGS = $(CURL_RC_FLAGS) /i../include /dDEBUGBUILD=0 /Fo $@ 
$(CURL_SRC_DIR)\curl.rc
 !ENDIF
 
 !IF "$(AS_DLL)" == "true"
@@ -485,14 +485,18 @@ LFLAGS = $(LFLAGS) $(LFLAGS_PDB)
 CFLAGS = $(CFLAGS) /DCURL_WITH_MULTI_SSL
 !ENDIF
 
+!IF "$(USE_UNICODE)"=="true"
+CFLAGS = $(CFLAGS) /DUNICODE /D_UNICODE
+!ENDIF
+
 LIB_DIROBJ = ..\builds\$(CONFIG_NAME_LIB)-obj-lib
-CURL_DIROBJ = ..\builds\$(CONFIG_NAME_LIB)-obj-curl
-
-!IFDEF WITH_PREFIX
-DIRDIST = $(WITH_PREFIX)
-!ELSE
-DIRDIST = ..\builds\$(CONFIG_NAME_LIB)\
-!ENDIF
+CURL_DIROBJ = ..\builds\$(CONFIG_NAME_LIB)-obj-curl
+
+!IFDEF WITH_PREFIX
+DIRDIST = $(WITH_PREFIX)
+!ELSE
+DIRDIST = ..\builds\$(CONFIG_NAME_LIB)\
+!ENDIF
 
 #
 # curl.exe
@@ -559,6 +563,7 @@ $(LIB_DIROBJ):
        @if not exist "$(LIB_DIROBJ)" mkdir $(LIB_DIROBJ)
        @if not exist "$(LIB_DIROBJ)\vauth" mkdir $(LIB_DIROBJ)\vauth
        @if not exist "$(LIB_DIROBJ)\vtls" mkdir $(LIB_DIROBJ)\vtls
+       @if not exist "$(LIB_DIROBJ)\vssh" mkdir $(LIB_DIROBJ)\vssh
        @if not exist "$(LIB_DIROBJ)\vquic" mkdir $(LIB_DIROBJ)\vquic
 
 $(CURL_DIROBJ):
@@ -578,6 +583,9 @@ $(CURL_DIROBJ):
 {$(LIBCURL_SRC_DIR)\vtls\}.c{$(LIB_DIROBJ)\vtls\}.obj:
        $(CURL_CC) $(CFLAGS) /Fo"$@"  $<
 
+{$(LIBCURL_SRC_DIR)\vssh\}.c{$(LIB_DIROBJ)\vssh\}.obj:
+       $(CURL_CC) $(CFLAGS) /Fo"$@"  $<
+
 {$(LIBCURL_SRC_DIR)\vquic\}.c{$(LIB_DIROBJ)\vquic\}.obj:
        $(CURL_CC) $(CFLAGS) /Fo"$@"  $<
 

-- 
To stop receiving notification emails like this one, please contact
address@hidden.



reply via email to

[Prev in Thread] Current Thread [Next in Thread]