One wrinkle to be aware of: conflux circuits have their own purpose on the client side, and you won't (easily) be able to reuse the existing path storage from tpo/network-health/team#313, because a conflux set has two legs.
It is fine to ignore this for now, as we will only be using that data from the congestion control instances. But if onionperf inherently assumes it will always be able to make sense of which relays are in a path, I could see surprises here.
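To make the "two legs" point concrete, here is a hypothetical sketch of what per-leg path storage could look like; the names (`ConfluxSetPaths`, `legs`, `add_leg`) are purely illustrative and not onionperf's actual schema.

```python
# Hypothetical sketch only: each conflux set is made of multiple legs,
# and each leg is its own circuit with its own relay path, so a single
# "path" per measurement no longer captures everything.
from dataclasses import dataclass, field


@dataclass
class ConfluxSetPaths:
    set_id: str
    # leg circuit id -> ordered list of relay fingerprints
    legs: dict[str, list[str]] = field(default_factory=dict)

    def add_leg(self, leg_circ_id: str, relays: list[str]) -> None:
        self.legs[leg_circ_id] = relays
```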
I think this sounds like a reasonable plan. I'd be especially happy to retire the 6-series instances that have already generated a lot of data.
On the other hand, I am a bit worried about what the new paths under conflux might look like and what onionperf might "think" of them. There might be some dev work to do there, but we need to test first.
Tor 0.4.8 is now pre-building 3 conflux sets, and we have close to 60% of exits upgraded. So conflux should now be behaving in a steady state.
The control port purpose field for conflux circuits is "CONFLUX_LINKED". We should make sure that is handled correctly and that the onionperf fetches are using that purpose. With 3 spare sets being maintained, the fetches should always end up on conflux circuits.
Yes, they should, same as before. They should also keep waiting for the BUILDTIMEOUT_SET event before fetching.
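For anyone wanting to eyeball this from the control port, here is a minimal sketch using stem, assuming a local tor with ControlPort 9051 and cookie auth; the CONFLUX_LINKED purpose and BUILDTIMEOUT_SET event are the ones mentioned above, but this is not onionperf's actual code.

```python
# Minimal sketch (not onionperf's code): watch CIRC events for the
# CONFLUX_LINKED purpose and note when BUILDTIMEOUT_SET fires.
from stem import EventType
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()

    def on_circ(event):
        # Conflux client circuits report the CONFLUX_LINKED purpose.
        if str(event.purpose) == 'CONFLUX_LINKED':
            print('conflux circuit %s is %s, path: %s'
                  % (event.id, event.status, event.path))

    def on_buildtimeout(event):
        # Fetches should only start once a build timeout has been computed.
        print('BUILDTIMEOUT_SET: timeout=%s ms' % event.timeout)

    controller.add_event_listener(on_circ, EventType.CIRC)
    controller.add_event_listener(on_buildtimeout, EventType.BUILDTIMEOUT_SET)
    input('listening for events, press enter to stop\n')
```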
Also, should I maintain the tor-0.4.8.0-alpha-dev instances, or can these be archived?
I am not sure what these are. We already had 0.4.8.0 running? Were these being graphed on the website?
Finally, should both the client and the server have conflux enabled? This basically matters for over-onion measurements.
Technically this does not matter since C-Tor onions don't do conflux. In fact, if it is possible to disable onion service measurements just for the cfx instances without everything exploding, maybe that would be best to avoid confusion. But otherwise, just enable it on both sides.
When we have artibench metrics and arti conflux, we will want conflux on both sides for onions.
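For the C-Tor side, enabling it on both client and server should just be a torrc setting; a sketch, assuming the 0.4.8 ConfluxEnabled option (worth double-checking against the tor manpage before deploying):

```
# torrc sketch for the cfx client and server instances.
# Assumes the 0.4.8 ConfluxEnabled option; verify against the tor manpage.
ConfluxEnabled 1
```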
It is an iptables issue with forwarding port 443 to tgen on 8080. All clients are deployed the same way, but something is happening between the AWS firewall and the VM itself.
OK, just found and fixed the issue. The prerouting rule we are using specifies the network interface, and these VMs use different interface names depending on location, so I had to change that. I now see traffic flowing from port 443 to 8080, reaching the tgen server.
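For reference, one way to sidestep per-VM interface names entirely is to key the redirect only on the destination port; a sketch along those lines (not necessarily the exact rule deployed):

```
# Sketch of a NAT prerouting redirect from 443 to the tgen listener on 8080.
# Omitting the -i <interface> match avoids per-VM interface-name differences.
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8080
```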