bug#19565: Emacs vulnerable to endless-data attack (minor)


From: Stefan Kangas
Subject: bug#19565: Emacs vulnerable to endless-data attack (minor)
Date: Sun, 6 Oct 2019 05:13:27 +0200

Kelly Dean <kelly@prtime.org> writes:

> A few days ago I speculated about this, but now I've confirmed it. It's
> technically considered a vulnerability, but in Emacs's case it's a minor
> problem; exploiting it would be more a prank than a real attack.
>
> To demo locally for archive metadata:
> echo -en 'HTTP/1.1 200 OK\r\n\r\n' > header
> cat header /dev/urandom | nc -l -p 80
>
> Then in Emacs:
> (setq package-archives '(("foo" . "http://127.0.0.1/")))
> M-x list-packages
>
> Watch Emacs's memory usage grow and grow...
>
> If you set some arbitrary limit on the size of archive-contents, then
> theoretically you break some legitimate, ginormous ELPA archive. And if you're
> getting garbage, you wouldn't know it until you've downloaded more garbage
> than the limit. The right way to fix it is to include the size of
> archive-contents in another file that can legitimately be constrained to a
> specified small maximum size, sign that file, and, in the client, abort the
> archive-contents download if you get more data than you're supposed to.
>
> The timestamp file that I proposed for fixing the metadata replay vuln (bug
> #19479) would be a suitable place to record the size; then no additional file
> (and signature) is needed just to solve endless-metadata. For the 
> corresponding
> endless-data vuln for packages instead of metadata, I already put sizes in the
> package records in my patch for the package replay vuln.
>
> Don't forget you need to set a maximum size not only on the timestamp file, 
> but also on the signature file, or they would be vulnerable too. E.g. just 
> hardcode 1kB.

I think this affects more than just package.el.  AFAICT, anywhere we
use the url library, an endless data attack can get Emacs to fill up
all available memory (also wasting bandwidth, of course).

Lars, perhaps we could add code to handle this in with-fetched-url?

For example, a new keyword argument :max-size could make it stop
reading after that many bytes.  IMO, it would be even better if this
were set to some arbitrarily chosen high value by default, like 256
MiB or something, so that the protection is on unless explicitly
turned off with nil.
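
A minimal sketch of that kind of cap, built on plain url-retrieve
since the :max-size keyword doesn't exist yet.  The names
my-max-download-size and my-url-retrieve-capped are made up for the
example, and whether the buffer-change hook fires for every chunk of
network data depends on the url library's process filter, so treat it
as illustrative only:

(require 'url)

(defvar my-max-download-size (* 256 1024 1024)
  "Abort a retrieval once this many bytes have arrived.")

(defun my-url-retrieve-capped (url callback)
  "Fetch URL like `url-retrieve', but give up after `my-max-download-size' bytes."
  (let ((buffer (url-retrieve url callback)))
    (when (buffer-live-p buffer)
      (with-current-buffer buffer
        ;; Watch the retrieval buffer grow; once it exceeds the cap,
        ;; kill the network process so the server cannot keep feeding
        ;; us data.  The retrieval is then simply aborted/incomplete
        ;; instead of Emacs eating all available memory.
        (add-hook 'after-change-functions
                  (lambda (_beg _end _len)
                    (when (> (buffer-size) my-max-download-size)
                      (let ((proc (get-buffer-process (current-buffer))))
                        (when proc
                          (delete-process proc)))))
                  nil t)))
    buffer))

;; Usage, e.g.:
;; (my-url-retrieve-capped "http://127.0.0.1/archive-contents"
;;                         (lambda (_status) (message "done or aborted")))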

Best regards,
Stefan Kangas




