Update OnionPerf to TGen 1.0.0

Status: Closed (moved)
Opened Apr 23, 2020 by Karsten Loesing (@karsten)

TGen 1.0.0 comes with a "change in the format of some of the configuration options that breaks compatibility with the previous version 0.0.1."

I tried to update OnionPerf to write out TGen files that TGen 1.0.0 understands. Here's the diff:

diff --git a/onionperf/model.py b/onionperf/model.py
index 3c057c5..90c824e 100644
--- a/onionperf/model.py
+++ b/onionperf/model.py
@@ -77,9 +77,9 @@ class TorperfModel(GeneratableTGenModel):
         if self.socksproxy is not None:
             g.node["start"]["socksproxy"] = self.socksproxy
         g.add_node("pause", time="5 minutes")
-        g.add_node("transfer50k", type="get", protocol="tcp", size="50 KiB", timeout="295 seconds", stallout="300 seconds")
-        g.add_node("transfer1m", type="get", protocol="tcp", size="1 MiB", timeout="1795 seconds", stallout="1800 seconds")
-        g.add_node("transfer5m", type="get", protocol="tcp", size="5 MiB", timeout="3595 seconds", stallout="3600 seconds")
+        g.add_node("stream50k", recvsize="50 KiB", timeout="295 seconds", stallout="300 seconds")
+        g.add_node("stream1m", recvsize="1 MiB", timeout="1795 seconds", stallout="1800 seconds")
+        g.add_node("stream5m", recvsize="5 MiB", timeout="3595 seconds", stallout="3600 seconds")
 
         g.add_edge("start", "pause")
 
@@ -88,9 +88,9 @@ class TorperfModel(GeneratableTGenModel):
         g.add_edge("pause", "pause")
 
         # these are chosen with weighted probability, change edge 'weight' attributes to adjust probability
-        g.add_edge("pause", "transfer50k", weight="12.0")
-        g.add_edge("pause", "transfer1m", weight="2.0")
-        g.add_edge("pause", "transfer5m", weight="1.0")
+        g.add_edge("pause", "stream50k", weight="12.0")
+        g.add_edge("pause", "stream1m", weight="2.0")
+        g.add_edge("pause", "stream5m", weight="1.0")
 
         return g
 
@@ -109,10 +109,10 @@ class OneshotModel(GeneratableTGenModel):
         g.add_node("start", serverport=self.tgen_port, peers=server_str, loglevel="info", heartbeat="1 minute")
         if self.socksproxy is not None:
             g.node["start"]["socksproxy"] = self.socksproxy
-        g.add_node("transfer5m", type="get", protocol="tcp", size="5 MiB", timeout="15 seconds", stallout="10 seconds")
+        g.add_node("stream5m", recvsize="5 MiB", timeout="15 seconds", stallout="10 seconds")
 
-        g.add_edge("start", "transfer5m")
-        g.add_edge("transfer5m", "start")
+        g.add_edge("start", "stream5m")
+        g.add_edge("stream5m", "start")
 
         return g
 

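For reference, here is a minimal standalone sketch of the graph the patched TorperfModel would now build, written directly against networkx (which model.py already builds on). The server, port, and proxy values are made-up placeholders; the point is only to dump a GraphML file and double-check the new stream*/recvsize attributes by eye before handing it to TGen 1.0.0:

# Standalone sketch (not part of the patch): rebuild the same graph shape as
# the patched TorperfModel.generate() and dump it to GraphML for inspection.
# Server, port, and proxy values are placeholders, not what OnionPerf fills in.
import networkx

g = networkx.DiGraph()
g.add_node("start", serverport="8080", peers="example-server:8080",
           loglevel="info", heartbeat="1 minute",
           socksproxy="localhost:9050")
g.add_node("pause", time="5 minutes")
# TGen 1.0.0 style: the node name indicates a stream action, and "recvsize"
# replaces the old type/protocol/size attributes from the 0.0.1 format.
g.add_node("stream50k", recvsize="50 KiB", timeout="295 seconds", stallout="300 seconds")
g.add_node("stream1m", recvsize="1 MiB", timeout="1795 seconds", stallout="1800 seconds")
g.add_node("stream5m", recvsize="5 MiB", timeout="3595 seconds", stallout="3600 seconds")

g.add_edge("start", "pause")
# self-loop keeps the pause/stream cycle going
g.add_edge("pause", "pause")
# weighted choice between the three stream sizes
g.add_edge("pause", "stream50k", weight="12.0")
g.add_edge("pause", "stream1m", weight="2.0")
g.add_edge("pause", "stream5m", weight="1.0")

networkx.write_graphml(g, "tgen.torperf.graphml")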
I'll let an OnionPerf instance run for a day to look at the output, and also to see whether we need to make adjustments to OnionPerf's analyze mode due to slightly changed log messages.
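As a quick sanity check on the analyze side, a throwaway script along these lines could count old-style versus new-style result lines in the tgen client log. The [transfer-complete]/[transfer-error] and [stream-success]/[stream-error] marker names are my assumption about the 0.0.1 vs. 1.0.0 log formats and still need to be confirmed against the actual logs:

# Throwaway check (assumption-laden): count result lines in a tgen log using
# the marker strings I expect from TGen 0.0.1 vs. 1.0.0. If a fresh log only
# ever matches the "new" markers, analyze mode needs the corresponding update.
import sys

OLD_MARKERS = ("[transfer-complete]", "[transfer-error]")  # assumed 0.0.1 names
NEW_MARKERS = ("[stream-success]", "[stream-error]")       # assumed 1.0.0 names

old_count = new_count = 0
with open(sys.argv[1]) as logfile:
    for line in logfile:
        if any(marker in line for marker in OLD_MARKERS):
            old_count += 1
        elif any(marker in line for marker in NEW_MARKERS):
            new_count += 1

print("old-style result lines: %d" % old_count)
print("new-style result lines: %d" % new_count)

(Invoked as, e.g., python check_markers.py onionperf.tgen.log; the file name is just an example.)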

Until then, do the changes above look reasonable, or did I miss something? Thanks!

Reference: legacy/trac#33974