Commit a8b970c7 authored by Donncha O'Cearbhaill's avatar Donncha O'Cearbhaill

Inital commit, very buggy!

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Config files and keys
*.yaml
*.key
# OnionBalance
WARNING: THIS IS VERY EXPERIMENTAL, ROUGH CODE. THIS IS NOT READY TO BE USED
FOR PRODUCTION. IT MAY CONTAIN CRITICAL SECURITY OR PERFORMANCE BUGS.
## Overview
The onion service load balancer allows an operator to distribute requests
for their onion service across between 1 and 10 separate Tor instances. Each
Tor instance can run completely independently, with no knowledge of any other
instance.
The load balancer is the only system which needs to store the onion service's
private key. As the load balancer handles no hidden service traffic, its
risk of deanonymisation by traffic analysis attacks is reduced.
## Installation
### Load Balancing Instances
Each load-balancing instance is an onion service configured with a unique
private key. To minimize the disclosure of information about your onion
service configuration it is advisable to configure some form of onion service
authentication.
The individual load balancing instances can use a standard Tor client.
#### Management Server
##### Generate an onion service key
You can use your existing onion service `private_key` or generate a new
one using OpenSSL.
$ openssl genrsa -out private_key 1024
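For reference, the 16-character `.onion` address is derived from this key: it is the lowercased base32 encoding of the first 80 bits of the SHA-1 hash of the DER-encoded public key. A minimal sketch of that derivation (placeholder bytes stand in for a real DER-encoded key):

```python
import base64
import hashlib

def calc_onion_address(public_key_der):
    # First 80 bits (10 bytes) of SHA-1 over the DER-encoded public
    # key, base32 encoded and lowercased, give the onion address.
    digest = hashlib.sha1(public_key_der).digest()[:10]
    return base64.b32encode(digest).decode('ascii').lower()

# Placeholder bytes stand in for a real DER-encoded RSA public key:
print(calc_onion_address(b'not-a-real-der-key') + '.onion')
```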
##### Encrypt an onion service private key
Your master onion service private key can be protected by encrypting it
while it is stored on disk. Due to limitations in the underlying pycrypto
library, only DES-CBC, DES-EDE3-CBC and AES-128-CBC encrypted keys are supported.
$ openssl rsa -des3 -in private_key -out private_key.enc
##### Configure Tor on the management server
For this tool to work you need a version of Tor with the ability to fetch
and upload HS descriptors. Until these features are merged into Tor, you can
use my patched Tor branch.
$ git clone https://github.com/DonnchaC/tor.git
$ cd tor
$ git checkout hs-fetch-and-post-cmds
The `docs/torrc` file contains a sample Tor config which is suitable for the
management server.
##### Install the management server
The code for the onion load balancer can be retrieved using git.
$ git clone https://github.com/DonnchaC/onion-balance.git
$ cd onion-balance
The management server requires a number of Python dependencies. These can
be installed using pip.
$ pip install -r requirements.txt
For onion service descriptor parsing you need a version of stem >= `1.3.0-dev`.
## Configuration
Each load balancing Tor instance is listed by its unique onion address.
An optional authentication cookie can also be provided if the back-end onion
service is using some form of descriptor encryption.
An example config file is provided in `config.yaml.example`.
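Since the config file is plain YAML, a quick structural check after parsing can catch mistakes early. A hedged sketch — `validate_config` is illustrative and not part of this codebase — operating on the dict that `yaml.safe_load` would return for the example config:

```python
# Required fields follow the config.yaml.example layout.
REQUIRED_SERVICE_FIELDS = {'key', 'instances'}

def validate_config(cfg):
    # Minimal structural validation of a parsed config dict.
    assert isinstance(cfg.get('services'), list), "'services' must be a list"
    for service in cfg['services']:
        missing = REQUIRED_SERVICE_FIELDS - set(service)
        assert not missing, "service missing fields: %s" % missing
        for instance in service['instances']:
            assert 'address' in instance, "each instance needs an 'address'"
    return True

example = {
    'refresh': 600,
    'services': [
        {'key': '/path/to/private_key',
         'instances': [{'address': 'o6ff73vmigi4oxka'},
                       {'address': 'nkz23ai6qesuwqhc',
                        'auth': 'dCmx3qIvArbil8A0KM4KgQ=='}]},
    ],
}
validate_config(example)
```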
## Running
Once your load balancing instances are running, you can run the onion load balancer by starting the management server:
$ python onion-balance/manage.py -f config.yaml
# Onion Load Balancer Config File
# ---
# Each hidden service key line should be followed by a list of 0
# or more instances which contain the onion address of the load balancing
# HS and any authentication data needed to access that HS.
refresh: 600 # How often to poll for updated descriptors
services:
- key: /path/to/private_key # 7s4hxwwifcslrus2.onion
instances:
- address: o6ff73vmigi4oxka # srv1
- address: nkz23ai6qesuwqhc # srv2
auth: dCmx3qIvArbil8A0KM4KgQ== # Hidden service authentication key
- key: /path/to/private_key.enc # dpkdeys3apjtqydk.onion
instances:
- address: htbzowpp5cn7wj2u # srv3
CookieAuthentication 1
FetchDirInfoEarly 1
FetchDirInfoExtraEarly 1
FetchUselessDescriptors 1
# -*- coding: utf-8 -*-
class Config(object):
"""
Config object to store the global state
"""
def __init__(self):
# Configure some defaults
self.config = {
'replicas': 2,
'max_intro_points': 10,
'refresh': 10 * 60
}
# In memory list of hidden services and balancing instances
self.hidden_services = []
cfg = Config()
# -*- coding: utf-8 -*-
import hashlib
import base64
import textwrap
import datetime
import Crypto.Util.number
import stem
import util
import log
logger = log.get_logger()
def generate_hs_descriptor(permanent_key, introduction_point_list=None,
replica=0, timestamp=None):
"""
High-level interface for generating a signed HS descriptor
TODO: Allow generation of descriptors for future timeperiod,
to help clients with a skewed clock
"""
if not timestamp:
timestamp = datetime.datetime.utcnow()
# strftime("%s") is platform-dependent and honours the local timezone;
# compute seconds since the epoch from the UTC timestamp directly.
unix_timestamp = int((timestamp - datetime.datetime(1970, 1, 1)).total_seconds())
permanent_key_block = make_public_key_block(permanent_key)
permanent_id = util.calc_permanent_id(permanent_key)
# Calculate the current secret-id-part for this hidden service
time_period = util.get_time_period(unix_timestamp, permanent_id)
secret_id_part = util.calc_secret_id_part(time_period, None, replica)
descriptor_id = util.calc_descriptor_id(permanent_id, secret_id_part)
intro_section = make_introduction_points_part(
introduction_point_list
)
unsigned_descriptor = generate_hs_descriptor_raw(
desc_id_base32=util.base32_encode_str(descriptor_id),
permanent_key_block=permanent_key_block,
secret_id_part_base32=util.base32_encode_str(secret_id_part),
publication_time=util.rounded_timestamp(timestamp),
introduction_points_part=intro_section
)
signed_descriptor = sign_descriptor(unsigned_descriptor, permanent_key)
return signed_descriptor
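The `util` calls above implement the v2 rendezvous-spec identifier arithmetic. A self-contained sketch of that arithmetic (these helpers mirror, but are not, the actual `util` functions; the descriptor cookie is optional):

```python
import hashlib
import struct

def get_time_period(unix_timestamp, permanent_id):
    # Time periods rotate once a day, offset by the first byte of the
    # permanent id so that all services do not rotate simultaneously.
    return (unix_timestamp + (permanent_id[0] * 86400) // 256) // 86400

def calc_secret_id_part(time_period, descriptor_cookie, replica):
    # secret-id-part = SHA1(time-period | descriptor-cookie | replica)
    digest = hashlib.sha1()
    digest.update(struct.pack('>I', time_period))
    if descriptor_cookie:
        digest.update(descriptor_cookie)
    digest.update(bytes([replica]))
    return digest.digest()

def calc_descriptor_id(permanent_id, secret_id_part):
    # descriptor-id = SHA1(permanent-id | secret-id-part)
    return hashlib.sha1(permanent_id + secret_id_part).digest()
```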
def generate_hs_descriptor_raw(desc_id_base32, permanent_key_block,
secret_id_part_base32, publication_time,
introduction_points_part):
doc = []
doc.append("rendezvous-service-descriptor {}".format(desc_id_base32))
doc.append("version 2")
doc.append("permanent-key")
doc.append(permanent_key_block)
doc.append("secret-id-part {}".format(secret_id_part_base32))
doc.append("publication-time {}".format(publication_time))
doc.append("protocol-versions 2,3")
doc.append("introduction-points")
doc.append(introduction_points_part)
doc.append("signature\n")
unsigned_descriptor = '\n'.join(doc)
return unsigned_descriptor
def make_introduction_points_part(introduction_point_list):
# If no intro points were specified, we should create an empty block
if not introduction_point_list:
introduction_point_list = []
intro = []
for intro_point in introduction_point_list:
intro.append("introduction-point {}".format(intro_point.identifier))
intro.append("ip-address {}".format(intro_point.address))
intro.append("onion-port {}".format(intro_point.port))
intro.append("onion-key")
intro.append(intro_point.onion_key)
intro.append("service-key")
intro.append(intro_point.service_key)
intro_section = '\n'.join(intro).encode('utf-8')
intro_section_base64 = base64.b64encode(intro_section).decode('utf-8')
intro_section_base64 = textwrap.fill(intro_section_base64, 64)
# Add the header and footer:
intro_points_with_headers = '\n'.join([
'-----BEGIN MESSAGE-----',
intro_section_base64,
'-----END MESSAGE-----'])
return intro_points_with_headers
def make_public_key_block(key):
"""
Get ASN.1 representation of public key, base64 and add headers
"""
asn1_pub = util.get_asn1_sequence(key)
pub_base64 = base64.b64encode(asn1_pub).decode('utf-8')
pub_base64 = textwrap.fill(pub_base64, 64)
# Add the header and footer:
pub_with_headers = '\n'.join([
'-----BEGIN RSA PUBLIC KEY-----',
pub_base64,
'-----END RSA PUBLIC KEY-----'])
return pub_with_headers
def sign_digest(digest, private_key):
"""
Sign, base64 encode, wrap and add Tor signature headers
The message digest is PKCS1 padded without the optional algorithmIdentifier
section.
"""
digest = util.add_pkcs1_padding(digest)
(signature_long, ) = private_key.sign(digest, None)
signature_bytes = Crypto.Util.number.long_to_bytes(signature_long, 128)
signature_base64 = base64.b64encode(signature_bytes).decode('utf-8')
signature_base64 = textwrap.fill(signature_base64, 64)
# Add the header and footer:
signature_with_headers = '\n'.join([
'-----BEGIN SIGNATURE-----',
signature_base64,
'-----END SIGNATURE-----'])
return signature_with_headers
def sign_descriptor(descriptor, service_key):
"""'Sign or resign a hidden service descriptor"""
TOKEN_HSDESCRIPTOR_SIGNATURE = '\nsignature\n'
# Remove signature block if it exists
if TOKEN_HSDESCRIPTOR_SIGNATURE in descriptor:
descriptor = descriptor[:descriptor.find(TOKEN_HSDESCRIPTOR_SIGNATURE)
+ len(TOKEN_HSDESCRIPTOR_SIGNATURE)]
else:
descriptor = descriptor.strip() + TOKEN_HSDESCRIPTOR_SIGNATURE
descriptor_digest = hashlib.sha1(descriptor.encode('utf-8')).digest()
signature_with_headers = sign_digest(descriptor_digest, service_key)
return descriptor + signature_with_headers
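The `util.add_pkcs1_padding` call above frames the 20-byte SHA-1 digest with EMSA-PKCS1-v1_5 padding, minus the optional algorithmIdentifier noted in the docstring. A sketch for a 1024-bit (128-byte) modulus (this mirrors, but is not, the actual `util` helper):

```python
import hashlib

def add_pkcs1_padding(digest, key_size_bytes=128):
    # EMSA-PKCS1-v1_5 framing without the DigestInfo header:
    # 0x00 0x01 <FF padding> 0x00 <digest>, sized to the modulus.
    padding_len = key_size_bytes - len(digest) - 3
    return b'\x00\x01' + (b'\xff' * padding_len) + b'\x00' + digest

padded = add_pkcs1_padding(hashlib.sha1(b'descriptor body').digest())
```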
def fetch_descriptor(controller, onion_address, hsdir=None):
"""
Try to fetch a HS descriptor from any of the responsible HSDirs
TODO: allow a hsdir to be specified
"""
logger.info("Sending HS descriptor fetch for %s.onion" % onion_address)
response = controller.msg("HSFETCH %s" % (onion_address))
(response_code, divider, response_content) = response.content()[0]
if not response.is_ok():
if response_code == "552":
raise stem.InvalidRequest(response_code, response_content)
else:
raise stem.ProtocolError("HSFETCH returned unexpected "
"response code: %s" % response_code)
def upload_descriptor(controller, signed_descriptor, hsdirs=None):
"""
Upload descriptor via the Tor control port
If no HSDirs are specified, Tor will upload to what it thinks are the
responsible directories
"""
logger.debug("Sending HS descriptor upload")
# Provide server fingerprints to the control command if HSDirs are
# specified (note the leading space separating them from HSPOST).
if hsdirs:
server_args = ' ' + ' '.join([("SERVER=%s" % hsdir) for hsdir in hsdirs])
else:
server_args = ""
response = controller.msg("HSPOST%s\r\n%s\r\n.\r\n" %
(server_args, signed_descriptor))
(response_code, divider, response_content) = response.content()[0]
if not response.is_ok():
if response_code == "552":
raise stem.InvalidRequest(response_code, response_content)
else:
raise stem.ProtocolError("+HSPOST returned unexpected response "
"code: %s" % response_code)
# -*- coding: utf-8 -*-
import stem
import log
import config
logger = log.get_logger()
class EventHandler(object):
"""
Handles asynchronous Tor events.
"""
def __init__(self, controller):
self.controller = controller
def new_desc(self, desc_event):
"""
Parse HS_DESC response events
"""
logger.debug("Received new HS_DESC event: %s" %
str(desc_event))
def new_desc_content(self, desc_content_event):
"""
Parse HS_DESC_CONTENT response events for descriptor content
Update the HS instance object with the data from the new descriptor.
"""
logger.debug("Received new HS_DESC_CONTENT event for %s" %
desc_content_event.address)
# Make sure the descriptor is not empty
descriptor_text = str(desc_content_event.descriptor).encode('utf-8')
if len(descriptor_text) < 5:
logger.debug("Empty descriptor received for %s" %
desc_content_event.address)
return
# Find the HS instance for this descriptor
for service in config.cfg.hidden_services:
for instance in service.instances:
if instance.onion_address == desc_content_event.address:
instance.update_descriptor(descriptor_text)
return
def new_event(self, event):
"""
Dispatches new Tor controller events to the appropriate handlers.
"""
if isinstance(event, stem.response.events.HSDescEvent):
self.new_desc(event)
elif isinstance(event, stem.response.events.HSDescContentEvent):
self.new_desc_content(event)
else:
logger.warning("Received unexpected event %s." % str(event))
# -*- coding: utf-8 -*-
import datetime
import random
import time
import Crypto.PublicKey.RSA
import stem.descriptor.hidden_service_descriptor
import descriptor
import util
import log
import config
logger = log.get_logger()
def fetch_all_descriptors(controller):
"""
Try to fetch fresh descriptors for all HS instances
"""
logger.info("Running periodic descriptor fetch")
# Clear Tor descriptor cache before fetches by sending NEWNYM
controller.signal(stem.control.Signal.NEWNYM)
time.sleep(5)
for service in config.cfg.hidden_services:
for instance in service.instances:
instance.fetch_descriptor()
def publish_all_descriptors():
"""
Called periodically to upload new super-descriptors if needed
"""
logger.info("Running periodic descriptor publish")
for service in config.cfg.hidden_services:
service.publish()
class HiddenService(object):
"""
HiddenService represents a front-facing hidden service which should
be load-balanced.
"""
def __init__(self, controller, service_key=None, instances=None):
"""
Initialise a HiddenService object.
"""
self.controller = controller
# Service key must be a valid PyCrypto RSA key object
if not isinstance(service_key, Crypto.PublicKey.RSA._RSAobj):
raise ValueError("Service key is not a valid RSA object")
else:
self.service_key = service_key
# List of load balancing Instances for this hidden service
if not instances:
instances = []
self.instances = instances
self.onion_address = util.calc_onion_address(self.service_key)
self.last_uploaded = None
def _intro_points_modified(self):
"""
Check if the introduction point set has changed since last
publish.
"""
for instance in self.instances:
if instance.changed_since_published:
return True
# No introduction points have changed
return False
def _descriptor_expiring(self):
"""
Check if the last uploaded super descriptor is expiring (> 1
hour old).
"""
if not self.last_uploaded:
# No descriptor uploaded yet, we should publish.
return True
descriptor_age = (datetime.datetime.utcnow() - self.last_uploaded)
if (descriptor_age.total_seconds() > 60 * 60):
return True
return False
def _get_combined_introduction_points(self):
"""
Choose set of introduction points from all fresh descriptors
TODO: There are probably better algorithms for choosing which
introduction points to include. If we have more than
`max_intro_points` introduction points, we will need to exclude
some. It is probably sensible to prefer IPs which are new and
haven't been included in any previous descriptors. Clients with
an old descriptor will continue trying previously published IPs
if they still work.
"""
combined_intro_points = []
for instance in self.instances:
if not instance.last_fetched:
logger.debug("No descriptor fetched for instance '%s' yet. "
"Skipping!" % instance.onion_address)
continue
# Check if the intro points are too old
intro_age = datetime.datetime.utcnow() - instance.last_fetched
if intro_age.total_seconds() > 60 * 60:
logger.info("Our introduction points for instance '%s' "
"are too old. Skipping!" % instance.onion_address)
continue
# Our IPs are recent enough, include them
instance.changed_since_published = False
combined_intro_points.extend(instance.introduction_points)
# Choose up to `max_intro_points` IPs from the combined set
max_introduction_points = min(
len(combined_intro_points),
config.cfg.config.get("max_intro_points")
)
chosen_intro_points = random.sample(combined_intro_points,
max_introduction_points)
# Shuffle the IPs to try to reveal less information about which
# instances are online and have introduction points included.
random.shuffle(chosen_intro_points)
logger.debug("Selected %d IPs (of %d) for service '%s'" %
(len(chosen_intro_points), len(combined_intro_points),
self.onion_address))
return chosen_intro_points
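The sampling-and-shuffle step above can be isolated into a small stand-alone sketch (a simplified illustration, not the class method itself):

```python
import random

def choose_intro_points(combined_intro_points, max_intro_points=10):
    # Sample at most max_intro_points IPs from the combined set, then
    # shuffle so the output order reveals less about which instance
    # supplied each introduction point.
    count = min(len(combined_intro_points), max_intro_points)
    chosen = random.sample(combined_intro_points, count)
    random.shuffle(chosen)
    return chosen
```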
def _get_signed_descriptor(self, replica=0):
"""
Generate a signed HS descriptor for this hidden service
"""
# Select a set of introduction points from this HS's load
# balancing instances.
introduction_points = self._get_combined_introduction_points()
signed_descriptor = descriptor.generate_hs_descriptor(
self.service_key,
introduction_point_list=introduction_points,
replica=replica
)
return signed_descriptor
def _upload_descriptor(self):
"""
Create, sign and upload a super-descriptor for this HS
TODO: If the descriptor ID is changing soon, upload to the current
and upcoming sets of HSDirs.
"""
for replica in range(0, config.cfg.config.get("replicas")):
signed_descriptor = self._get_signed_descriptor(replica=replica)
descriptor.upload_descriptor(self.controller, signed_descriptor)
self.last_uploaded = datetime.datetime.utcnow()
def publish(self, force=False):
"""
Publish the descriptor if we have new IPs or if the descriptor has expired
"""
if ( self._intro_points_modified() or
self._descriptor_expiring() or
force):
logger.info("Publishing new descriptor for '%s'" %
self.onion_address)
self._upload_descriptor()
class Instance(object):
"""