/pidgin/main: dd8ee564e065: Improve how our HTTP proxy code read...

Mark Doliner mark at kingant.net
Tue Jan 28 10:38:11 EST 2014


Changeset: dd8ee564e06546fc3c6bb44bd49138d06f413896
Author:	 Mark Doliner <mark at kingant.net>
Date:	 2014-01-12 14:32 -0800
Branch:	 release-2.x.y
URL: https://hg.pidgin.im/pidgin/main/rev/dd8ee564e065

Description:

Improve how our HTTP proxy code reads the content-length header.

It now reads the length into an unsigned variable. This was reported
to us by Matt Jones of Volvent. I don't think there's any potential
for remote code execution or remote crash--we don't allocate a buffer
for the content or anything. I think it's just ugly code. I didn't
impose a max size for the content-length. I feel like it's ok for the
while loop to loop many times, then get EAGAIN and break out.

I'm looking forward to unified http parsing in 3.0.0.

diffstat:

 ChangeLog         |   2 ++
 libpurple/proxy.c |  20 ++++++++++++++++----
 2 files changed, 18 insertions(+), 4 deletions(-)

diffs (54 lines):

diff --git a/ChangeLog b/ChangeLog
--- a/ChangeLog
+++ b/ChangeLog
@@ -9,6 +9,8 @@ version 2.10.8:
 	* Fix buffer overflow when parsing a malformed HTTP response with
 	  chunked Transfer-Encoding. (Discovered by Matt Jones, Volvent)
 	  (CVE-2014-NNNN)
+	* Better handling of HTTP proxy responses with negative Content-Lengths.
+	  (Discovered by Matt Jones, Volvent)
 	* Fix handling of SSL certificates without subjects when using libnss.
 	* Fix handling of SSL certificates with timestamps in the distant future
 	  when using libnss. (#15586)
diff --git a/libpurple/proxy.c b/libpurple/proxy.c
--- a/libpurple/proxy.c
+++ b/libpurple/proxy.c
@@ -985,22 +985,34 @@ http_canread(gpointer data, gint source,
 	p = g_strrstr((const gchar *)connect_data->read_buffer, "Content-Length: ");
 	if (p != NULL) {
 		gchar *tmp;
-		int len = 0;
+		gsize content_len;
 		char tmpc;
+
 		p += strlen("Content-Length: ");
 		tmp = strchr(p, '\r');
 		if(tmp)
 			*tmp = '\0';
-		len = atoi(p);
+		if (sscanf(p, "%" G_GSIZE_FORMAT, &content_len) != 1) {
+			/* Couldn't read content length */
+			purple_debug_error("proxy", "Couldn't read content length value "
+					"from %s\n", p);
+			if(tmp)
+				*tmp = '\r';
+			purple_proxy_connect_data_disconnect_formatted(connect_data,
+					_("Unable to parse response from HTTP proxy: %s"),
+					connect_data->read_buffer);
+			return;
+		}
 		if(tmp)
 			*tmp = '\r';
 
 		/* Compensate for what has already been read */
-		len -= connect_data->read_len - headers_len;
+		content_len -= connect_data->read_len - headers_len;
 		/* I'm assuming that we're doing this to prevent the server from
 		   complaining / breaking since we don't read the whole page */
-		while (len--) {
+		while (content_len--) {
 			/* TODO: deal with EAGAIN (and other errors) better */
+			/* TODO: Reading 1 byte at a time is horrible and stupid. */
 			if (read(connect_data->fd, &tmpc, 1) < 0 && errno != EAGAIN)
 				break;
 		}
