The Tor Project issues — https://gitlab.torproject.org/groups/tpo/-/issues

---
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/41065
navigator.storage "best effort" + "persistent" leak partitionSize/totalSpace entropy (Thorin, 2023-11-20)

Edit: upstream [1781277](https://bugzilla.mozilla.org/show_bug.cgi?id=1781277)

AFAICT, in FF96 or lower, storage quota was always `2,147,483,648`. Note PB windows and normal windows are always the same. Also note, I'm not interested in slight variations due to storage usage; we can cancel that noise out.
In FF97 (I'll see if I can find the bugzilla) it seems to have become dependent on disk space (and/or disk size), which adds some entropy. Here are my notes
```
// 2147483648 : FF57-96 Windows (and TB ESRs) / Android / Win 10 VM FF60-96
// FF97+ : note opusforlife, bashonly are user names who submitted data (so I can track it)
10737418240 : numerous windows + linux users with lots of disk size, Android Fabrizio 100gb spare from 128gb
5778733465 : Android10 opusforlife 12gb spare from 64gb (Mull)
5641604300 : Android Fabrizio 49gb spare from 64gb
5512729395 : Android9 Thorin 44gb spare from 64gb
5301081292 : Android bashonly 40gb spare from 64gb
5256596684 : Win 10 VM 33gb spare from 52gb
2934867968 : Debian XFCE 2glops 650gb spare from 1TB
1521166745 : Ubuntu VM Fabrizio with 15GB of storage
1177328025 : Android aleyvo 1.5gb spare from 16gb
// other: who cares if they match
// brave: 2147483648 (same in incognito and Tor window)
// opera: 310418104 normal
// opera: 521917312 private
// chrome: 1200238045593 normal
// chrome: 33076376370 normal android
// chrome: 485041940 incognito
// chrome: 204974075 incognito android
```
So TB102+ will reflect this; so far I have 3 results. You can test [here](https://arkenfox.github.io/TZP/tests/engine.html): scroll down to the bottom, just above the `ERRORS` section. Alternatively, run this in the browser console and then expand the promise:
```js
console.log(navigator.storage.estimate())
```
So what would be nice to know is how deep this rabbit hole goes... and to look at the code changes in FF97+, which will probably tell us the answer. Once we know how bad it is, we can propose that RFP handle it upstream.
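To put a rough number on it: under the simplifying assumption that each distinct quota value observed above is equally likely (real-world distributions will be skewed), the entropy exposed is bounded by log2 of the number of distinguishable values. A minimal sketch:

```python
import math

# FF97+ quota values (bytes) from the notes above.
quotas = [
    10737418240, 5778733465, 5641604300, 5512729395, 5301081292,
    5256596684, 2934867968, 1521166745, 1177328025,
]

# Rough upper bound on leaked entropy, assuming each distinct
# value is equally likely (a simplification).
bits = math.log2(len(set(quotas)))
print(f"~{bits:.2f} bits")  # ~3.17 bits for these 9 samples
```

In practice the quota varies continuously with free disk space, so the real-world entropy is considerably higher than this small sample suggests.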
@richard `Fingerprinting` label please. Also, anyone: feel free to report your quota + OS + free disk size if applicable.
---
EDIT
- FF97 [1735713](https://bugzilla.mozilla.org/show_bug.cgi?id=1735713) Revamp temporary storage limits
- FF97 [1593646](https://bugzilla.mozilla.org/show_bug.cgi?id=1593646) StorageManager.estimate is misleading when...
digging into it now: https://bugzilla.mozilla.org/show_bug.cgi?id=1735713#c0
> Our temporary storage limits are still based on free disk space which is now in a conflict with the storage spec. We should base our temporary storage limits on total disk space.

Sponsor 131 - Phase 2 - Privacy Browser

---
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/41060
Figuring out how to connect after configuring a bridge is a pain point (donuts, 2023-05-10)

Usability testing of the Connection settings redesign conducted in https://gitlab.torproject.org/tpo/ux/research/-/issues/52 & https://gitlab.torproject.org/tpo/ux/research/-/issues/78 has highlighted a pain point: some participants found it difficult to figure out the next step after configuring a bridge. Often they seem to pause after clicking the blue `OK` button, presumably believing that this is enough to connect.
At present, these users need to either:
1. Scroll back up to the purple banner at the top of the page, and click `Connect` – or:
2. Return to `about:torconnect` and click `Connect` there.
However, neither of these routes is obvious initially.

Sponsor 30 - Objective 3.5

---
https://gitlab.torproject.org/tpo/applications/tor-browser-build/-/issues/40565
Potential Wayland dependency (Matthew Finkel, 2022-10-08)

We received a report that Tor Browser 11.0 now fails to start on a (Gentoo) Linux machine that does not have Wayland installed. Firefox 91.3.0esr does start.

`libxul.so: undefined symbol: gdk_wayland_display_get_wl_compositor`

---
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40156
Multi-Pool Matching Support In Snowflake (shelikhoo, 2023-03-15)

Currently, there is only one matching pool for all snowflake proxies. Proxies either enter the pool and match with any client, or, if deemed ineligible, are rejected from the pool. This makes it difficult to serve more than one purpose, as clients sharing a single pool cannot choose which set of servers they would like to relay traffic to.
To add multi-pool matching support into Snowflake:
- Add multi-pool support in the broker
- Add an `or` expression in the domain matcher (both the standalone and Browser Extension ports)
- Add UI changes to the Browser Extension to support selective participation

---
https://gitlab.torproject.org/tpo/anti-censorship/rdsys/-/issues/113
Long term project: feedback loop to dynamically adjust bridge pool sizes (inspired by proximax) (Roger Dingledine, 2022-12-01)

Now that we have bridgestrap working, and we are ramping up our in-country deployments for assessing which bridges are getting blocked, we're back in position to reach another long-wanted milestone for the bridge distribution system. The proximax-inspired bridge distribution "meta-strategy" is simple to explain: keep track of which bridge distribution strategies are effective, and automatically send new bridges toward the strategies that are succeeding.
As described in the "five ways to test bridge reachability" blog post:
"We can define the *efficiency* of a bridge address in terms of how many people use it and how long before it gets blocked. So a bridge that gets blocked very quickly scores a low efficiency, a bridge that doesn't get blocked but doesn't see much use scores a medium efficiency, and a popular bridge that doesn't get blocked scores high. We can characterize the efficiency of a distribution channel as a function of the efficiency of the bridges it distributes. The key insight is that we then adapt how many new bridges we give to each distribution channel based on its efficiency. So channels that are working well automatically get more addresses to give out, and channels that aren't working well automatically end up with fewer addresses."
Many more details in that blog post (https://blog.torproject.org/research-problem-five-ways-test-bridge-reachability/) and in the original research paper (https://www.freehaven.net/anonbib/#proximax11).
In terms of roadmap, here are possible steps.
Phase one, getting the building blocks in place:
* [ ] Pick a utility function for how to assess a given bridge. Start with something simple and then iterate. For example, "current number of users times number of days we've been giving it out and it's been unblocked." A later more complex function might be "for each country where it has users, add up the (# of users in that country times the number of days it's been reachable from that country)." A third iteration might be to assign a per-country weight multiplier based on how much blocking the country is experiencing, how much we want to prioritize that country, etc. Another idea is to look at bandwidth use (more is better) in addition to simple user count.
* [ ] Work with the metrics team to make decisions about the data architecture. Calculating bridge scores involves looking at historical info about bridges, and bridge usage, and bridge reachability. Maybe this is all better done inside onionoo -- and then the rest of the metrics pipeline like relay-search is in a better position to use it too.
* [ ] Pick a composite utility function for how to assess a given distribution strategy given scores for each bridge in its pool. Again, start simple, for example "add up the utility scores for each bridge in that pool."
* [ ] Take a step back: examine the scores we have, and see if they match our intuition. Retune the utility functions. Put up some ongoing graphs on the metrics pages. The per-strategy summary scores are already useful at this point, to let us and others see which strategies are being effective.
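The phase-one scoring described above can be sketched in a few lines. All names here (`Bridge`, `bridge_utility`, `strategy_utility`) are hypothetical illustrations of the simple starting functions proposed, not rdsys code:

```python
from dataclasses import dataclass

@dataclass
class Bridge:
    users: int           # current number of users
    days_unblocked: int  # days we've been giving it out and it's been unblocked

def bridge_utility(b: Bridge) -> int:
    # Simplest proposed per-bridge score: users times days unblocked.
    return b.users * b.days_unblocked

def strategy_utility(pool: list) -> int:
    # Simplest proposed composite: add up the scores of the pool's bridges.
    return sum(bridge_utility(b) for b in pool)

pool = [Bridge(users=40, days_unblocked=30), Bridge(users=5, days_unblocked=200)]
print(strategy_utility(pool))  # 40*30 + 5*200 = 2200
```

Later iterations (per-country weights, bandwidth terms, gaming resistance) would replace these bodies without changing the shape of the feedback loop.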
Phase two, automating the feedback loop:
* [ ] Research: based on the above experience, invent utility functions which are both effective and more robust to gaming. In particular, consider how to handle attacks where colluding bridges report inflated user counts, to make a given distribution strategy look better than it really is. And make sure our utility functions converge, rather than e.g. letting one strategy's score grow so it gets more bridges so it grows more and pretty soon it's at infinity.
* [ ] Research: decide how to adjust the allocations for a new bridge: given the existing pool sizes, and the current composite utility functions, which pool should a new bridge be given to? I bet complex systems control theory has some suggestions here.
* [ ] Monitor it all to make sure it continues behaving as we expect.
In particular, one of the key features of this approach is that we can easily add new *external* bridge distribution mechanisms, such as this week's new "Salmon lite" from the Russian hackathon: we give them a "seed" pool of initial bridges, and then their approach sinks or swims on its own, without any need for manual evaluation or centralized adjustments at the rdsys level.

---
https://gitlab.torproject.org/tpo/community/support/-/issues/40075
Figure out how to award custom forum badges automatically (donuts, 2022-07-20)

I did some digging on how to create triggers for custom forum badges (e.g. to automatically award a Tor Browser Alpha Tester badge when a user posts in the Alpha Feedback category), and it looks like we'd need to:
1. [Enable triggered custom badge queries](https://meta.discourse.org/t/triggered-custom-badge-queries/19336) for our discourse instance
2. Write an SQL query for each badge we create
3. And choose the trigger for when to award the badge
However enabling this functionality [comes with some warnings from Discourse](https://meta.discourse.org/t/badge-sql-can-no-longer-be-edited-by-default/47894):
> Starting from Discourse 1.6 badge sql can no longer be edited by admins unless explicitly enabled.
>
> This change was made for a couple of reasons
>
> Security: allowing admins to enter SQL directly allows them raw access to the database, generally we are opting that raw access to the database from the web UI is a feature you opt-in for. Even though the queries only return user_ids, an admin attacker can discover any information in the database using badge queries. If column A of table Y has the letter A in it return user_id 1 else 2.
>
> Performance: getting badge SQL “just right” is an art, it is not something that is trivial for admins to do correctly. There is huge amount of risk that people who are not experts can create enormous load on a database by entering bad SQL
The alternative would be to have forum admins [manually grant badges](https://meta.discourse.org/t/grant-a-badge-to-individual-users-manually/29426) instead, which may not be so bad depending on the volume. What do you think @gus?

---
https://gitlab.torproject.org/tpo/core/team/-/issues/26
New tor.git Release and Support Policies (David Goulet <dgoulet@torproject.org>, 2022-06-06)

Next week, on April 27th 2022, we'll finally release the first stable version of 0.4.7.x. This also means that we will branch out from `main` and open the merge window for the 0.4.8.x series.
I wanted to align this release with this drastic change in our release and support policies:
* Support Policy:
https://gitlab.torproject.org/dgoulet/core-team-wiki/-/blob/release-proposal/NetworkTeam/SupportPolicy.md
* Release Policy: https://gitlab.torproject.org/dgoulet/core-team-wiki/-/blob/release-proposal/NetworkTeam/ReleasePolicy.md
The main rationale behind these changes is that, simply put, our engineering resources have shifted heavily towards `arti` (yay!!), so we need to reduce our workload on the `tor.git` side. The other reason, as you'll see below, is the vast importance of a "fast relay upgrade path" for the security and health of the network, and thus our users.
There are several noticeable changes from what we do today but two in particular I would like to point out.
1. No More LTS
Apart from being a burden because of backport complexity, LTS releases are actually a bit of a problem on the relay side of things. We need a healthy network, and that implies, in part, having up-to-date relays: for security reasons, yes, but also to take advantage of the new features and defenses we roll out in the latest stable. We are currently suffering a 3-to-5-year upgrade path due to LTS versions that linger in the so-called stable OS distributions.
Relay operators **MUST** stop depending on packages in stable distributions. We have to move our packaging efforts towards `deb.torproject.org` and more agile packaging alternatives like `snap`. In other words, our latest stable release must be accessible rapidly to most operators. Keeping an LTS series is counter-productive to that.
2. Drop the 6 months fixed stable release
As we've seen with the 0.4.7.x series, we needed more time to roll out a version we were satisfied with and of high quality. It led to a much better and more thoroughly tested tor, without an intermediary release with half-baked features that we would then need to maintain for months.
The bottom line here for me is **simplification** of our tor.git processes, as it is unrealistic at this point to keep promising our current policies, which were designed years ago with a much larger team working on `tor.git`.

---
https://gitlab.torproject.org/tpo/network-health/metrics/website/-/issues/40048
Find a better way to visualize statistics (Hiro, 2022-12-13)

While working on https://gitlab.torproject.org/tpo/network-health/metrics/website/-/issues/40009 I have noticed that our current graphs don't help us understand trends and seasonality in our series.
As a test, I have run the following simple decomposition analysis on bridge clients connecting from Russia between February and the end of March.
```python
import pandas as pd
df = pd.read_csv('userstats-combined.csv')
```
```python
df.info()
```
```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2457009 entries, 0 to 2457008
Data columns (total 8 columns):
 #   Column     Dtype
---  ------     -----
 0   date       object
 1   node       object
 2   country    object
 3   transport  object
 4   version    float64
 5   frac       int64
 6   low        int64
 7   high       int64
dtypes: float64(1), int64(3), object(4)
memory usage: 150.0+ MB
```
```python
threshold = 100 # Anything that occurs less than this will be removed.
df = df[df.high >= threshold]
df = df[df.country != "??"]
date_th = '2022-02-01'
df = df[df.date >= date_th]
```
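One small caveat with the snippet above: `date` is loaded as plain strings, so comparisons like `df.date >= date_th` only work because ISO dates sort lexicographically, and matplotlib gets a categorical axis. Parsing the column gives a real time axis and easy weekday analysis (a sketch with toy data, kept separate from the real `df`):

```python
import pandas as pd

# Toy frame standing in for the userstats data above.
toy = pd.DataFrame({"date": ["2022-02-01", "2022-02-02"], "high": [217, 220]})

# Parse ISO strings into datetimes: plots then get a proper time axis,
# and weekday-based seasonality checks become trivial.
toy["date"] = pd.to_datetime(toy["date"])
print(toy["date"].dt.day_name().tolist())  # ['Tuesday', 'Wednesday']
```

In the notebook, `df["date"] = pd.to_datetime(df["date"])` right after loading would have the same effect.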
```python
df
```
```
               date    node country  transport  version  frac   low  high
2409766  2022-02-01  bridge      ae      obfs4      NaN    85   201   217
2409793  2022-02-01  bridge      ar      obfs4      NaN    85   113   124
2409799  2022-02-01  bridge      at      obfs4      NaN    85   184   199
2409805  2022-02-01  bridge      au      obfs4      NaN    85   410   438
2409824  2022-02-01  bridge      bd      obfs4      NaN    85   105   110
...             ...     ...     ...        ...      ...   ...   ...   ...
2456964  2022-03-30  bridge      us      obfs4      NaN    92  6474  6577
2456966  2022-03-30  bridge      us  snowflake      NaN    92   681   682
2456974  2022-03-30  bridge      uz      obfs4      NaN    92   124   130
2456987  2022-03-30  bridge      vn      obfs4      NaN    92   195   199
2457000  2022-03-30  bridge      za      obfs4      NaN    92   162   169

[3837 rows x 8 columns]
```
```python
ru_ts = df[df['country']=='ru']
# Extract the names of the numerical columns
transports=['<OR>','obfs4','meek', 'snowflake']
metrics=['frac', 'high']
```
```python
ru_ts
```
```
               date    node country  transport  version  frac    low   high
2410472  2022-02-01  bridge      ru       <OR>      NaN    85   1335   1514
2410473  2022-02-01  bridge      ru       meek      NaN    85   2113   2120
2410475  2022-02-01  bridge      ru      obfs3      NaN    85    336    351
2410476  2022-02-01  bridge      ru      obfs4      NaN    85  24723  24918
2410478  2022-02-01  bridge      ru  snowflake      NaN    85   2456   2456
...             ...     ...     ...        ...      ...   ...    ...    ...
2456826  2022-03-30  bridge      ru       <OR>      NaN    92   1700   1881
2456827  2022-03-30  bridge      ru       meek      NaN    92   1668   1675
2456828  2022-03-30  bridge      ru      obfs3      NaN    92    434    437
2456829  2022-03-30  bridge      ru      obfs4      NaN    92  32814  32994
2456831  2022-03-30  bridge      ru  snowflake      NaN    92   5164   5165

[288 rows x 8 columns]
```
First I plot statistics per transport, showing the frac and high metrics for each.
```python
import matplotlib.pyplot as plt

# Plot a time series per transport and metric, with a dashed vertical
# line marking each observation date
for t in transports:
    serie = ru_ts[ru_ts.transport == t]
    for m in metrics:
        _ = plt.figure(figsize=(18,3))
        _ = plt.plot(serie.date, serie[m], color='blue')
        _ = plt.title("{} - {}".format(t,m))
        _ = plt.gcf().autofmt_xdate()
        for xc in serie.date:
            _ = plt.axvline(x=xc, color='black', linestyle='--')
        plt.show()
```
![output_6_0](/uploads/14e2c45f9d76d524acc9386de2018a2f/output_6_0.png)
![output_6_1](/uploads/baaad1ab15e67e6addb6ddb7cf4af8c2/output_6_1.png)
![output_6_2](/uploads/86c81b99998b5c84ffe6b8ab11f6d506/output_6_2.png)
![output_6_3](/uploads/4436565a4e09894e12c131e430f52440/output_6_3.png)
![output_6_4](/uploads/158beb4a36f01f4eadd6857ca1d0f2d2/output_6_4.png)
![output_6_5](/uploads/22eeabbbf4695c89583178ec74d34694/output_6_5.png)
![output_6_6](/uploads/d8e2f5da8f7ddf680d3ff8d31f98f753/output_6_6.png)
![output_6_7](/uploads/9858555f92e341fc47aeccb8fb3e2937/output_6_7.png)
Now I run a seasonal decomposition on the snowflake transport and the high metric. I use a period of 8 days since this paper (https://arxiv.org/pdf/1507.05819.pdf) identified a weekly seasonality for Tor users (which is generally the case for internet users).
```python
from statsmodels.tsa.seasonal import seasonal_decompose
serie = pd.DataFrame(ru_ts[ru_ts.transport == 'snowflake']['high'])
decompose_result_mult = seasonal_decompose(serie, model="multiplicative", extrapolate_trend='freq', period=8)
trend = decompose_result_mult.trend
seasonal = decompose_result_mult.seasonal
residual = decompose_result_mult.resid
_ = plt.figure(figsize=(18,10))
_ = plt.title("trend")
_ = trend.plot()
plt.show()
_ = plt.figure(figsize=(18,10))
_ = plt.title("seasonal")
_ = seasonal.plot()
plt.show()
_ = plt.figure(figsize=(18,10))
_ = plt.title("residual")
_ = residual.plot()
plt.show()
```
![output_7_0](/uploads/6e7b154501219271f89f8dc87e9b797b/output_7_0.png)
![output_7_1](/uploads/3956d7c83b05d23f30c1dbd8ca04e8c9/output_7_1.png)
![output_7_2](/uploads/1e5f80a70f909094e594ebd79f61dd70/output_7_2.png)
This last bit shows some day-over-day differences I was playing with. It needs polish, but it gives an idea of how things change from one day to the next.
```python
for t in transports:
    serie = ru_ts[ru_ts.transport == t]
    for m in metrics:
        _ = plt.figure(figsize=(18,3))
        X = serie[m].values
        # First-order differences between consecutive observations
        diff = list()
        for i in range(1, len(X)):
            value = X[i] - X[i - 1]
            diff.append(value)
        _ = plt.plot(diff, color='blue')
        _ = plt.title("{} - {}".format(t,m))
        _ = plt.gcf().autofmt_xdate()
        plt.show()
```
![output_8_0](/uploads/32e9e4ac72a2b0f8aaca68fb5814e105/output_8_0.png)
![output_8_1](/uploads/b8fa1d802e00e07223415ec4c2954462/output_8_1.png)
![output_8_2](/uploads/5cc2d509c785e2cbcfcb344668882fa3/output_8_2.png)
![output_8_3](/uploads/b611d66db2528371dc92c0c53ea7e471/output_8_3.png)
![output_8_4](/uploads/913ed4d87d38ee2a4ca40bb75b8becde/output_8_4.png)
![output_8_5](/uploads/60c0ef256b16e348757e57dad2091d76/output_8_5.png)
![output_8_6](/uploads/58c4765e4fac0d93dfabae5675939735/output_8_6.png)
![output_8_7](/uploads/7861799fe6e7132edde4f599c134f539/output_8_7.png)
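As an aside, the manual `X[i] - X[i-1]` loop above can be expressed with pandas' built-in `diff` (a sketch with a toy series; `serie[m].diff()` would slot into the loop directly):

```python
import pandas as pd

# Day-over-day differences, equivalent to the manual X[i] - X[i-1] loop above.
s = pd.Series([100, 130, 125, 160])
diffs = s.diff().dropna()
print(diffs.tolist())  # [30.0, -5.0, 35.0]
```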
---
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40128
Give standalone snowflakes guidance on how best to set up their NAT (Roger Dingledine, 2023-03-31)

According to our current broker stats (https://snowflake-broker.torproject.net/debug), we have
```
current snowflakes available: 3021
standalone proxies: 2589
browser proxies: 5
webext proxies: 250
unknown proxies: 177
NAT Types available:
restricted: 2512
unrestricted: 386
unknown: 123
```
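From that snapshot the skew toward restricted NAT is stark; a quick computation:

```python
# Broker snapshot from above: available proxies by NAT type.
nat = {"restricted": 2512, "unrestricted": 386, "unknown": 123}
restricted_share = nat["restricted"] / sum(nat.values())
print(f"{restricted_share:.0%}")  # 83%
```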
i.e. most of the snowflakes that we're giving out seem to be standalone ones as opposed to browser extension ones, and most of the ones we have available are behind restricted NAT.
It seems to me that the standalone ones are probably in a better position to be behind the good kind of NAT (or no NAT at all). But does our Docker image impose the bad kind of NAT on them by default? How come so many standalone proxies are behind restricted NAT?
More generally: is there useful guidance we can give people on setting themselves up with the right kind of NAT, presuming they're on a VPS or otherwise on a 'real' internet connection?

---
https://gitlab.torproject.org/tpo/ux/team/-/issues/77
Trustworthy mobile app distribution in target regions (Nathan Freitas, 2024-01-10)

We must do better than "a link to an APK".
Need to consider all options
- include apps in f-droid.org main repo
- promote apps in guardian project f-droid repo
- start new "anti-censorship app store" repo
- create alternate branded apps for distribution in region app stores

---
https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/40885
Update tb-manual to Tor Browser's content style when bundled (donuts, 2024-03-05)

We decided in #31539 and #11698 to proceed with bundling the tb-manual in Tor Browser 11.5 (if possible). This ticket is a reminder to explore lightly restyling the page so it looks native to the browser and less like a torproject.org website.

---
https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snowflake/-/issues/40126
Visualize the Snowflake Network (serene, 2023-03-31)

To further upgrade the main page, we could include a live visualization showing the strength of the Snowflake network (in a way which is scrubbed/anonymized, of course; the underlying metrics, I believe, always are).
- It can show how many people are currently helping, how many people are being helped.
- It would further assist in immediately making clear to new visitors what exactly Snowflake is/does, and how they can immediately be involved... whether as a volunteer proxy, as a user, as a dev, as a funder...
- It would be cool.
I've not implemented this yet, but it's on my list. It would be an excellent addition to the landing page. See: #40125
I will update this ticket with a screenshot or demo soon.

---
https://gitlab.torproject.org/tpo/web/community/-/issues/265
[Training] Suggest possible training resources page revamps (raya, 2023-01-30)
### Proposal
To suggest possible redesigns for the [training resources page](https://community.torproject.org/training/resources/) within the community portal.
Current issues:
- no curation of content: page simply lists resources made by the Tor Project and community members
- absent timestamps (dates created, updated, etc.);
- absent filtering: resources in multiple languages are bundled together, etc.
Additionally, each resource could have:
- tags (e.g. "Tor Browser", "Tor Mobile", "Relays", "Tails"), which would aid in filtering;
- a separate sub-page which could house details such as: learning objectives, time estimate for training delivery, further reading suggestions, etc.
As well, some of the resources listed are in formats other than presentation decks, including guides, zines, and long reads. For example:
- [guide] https://freedom.press/training/-depth-guide-choosing-web-browser/
- [kit] https://cpj.org/2019/07/digital-safety-kit-journalists/
- [zine] https://www.derechosdigitales.org/wp-content/uploads/que-no-quede-huella.pdf

Sponsor 9 - Phase 6 - Usability and Community Intervention on Support for Democracy and Human Rights

---
https://gitlab.torproject.org/tpo/core/tor/-/issues/40583
Default SocksPort should bind to localhost IPv6 also (s7r, 2022-04-07)

Currently a default `SocksPort 9050` will only bind to 127.0.0.1:9050.
While this is of course sufficient for most cases, we might also want to bind by default to [::1]:9050 as well, along with 127.0.0.1. At some point maybe TBB will configure Firefox to use ::1 instead of 127.0.0.1. I don't think any decent OS lacks localhost IPv6 support in 2022.
Are there any reasons we might not want to do this?
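In the meantime, the behaviour proposed here can be configured explicitly. An illustrative torrc fragment (not a proposed default, just what a dual-stack loopback setup looks like today):

```
SocksPort 127.0.0.1:9050
SocksPort [::1]:9050
```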
Also, in an (I think outdated) manual version from 2019 on www.tpo I can find this SocksPort flag with this description:

> PreferIPv6
> Tells exits that, if a host has both an IPv4 and an IPv6 address, we would prefer to connect to it via IPv6. (IPv4 is the default.)
I don't know if IPv4 is the default, because in the latest Tor when my clearnet destination has both IPv4 and IPv6 and the Exit node in my circuit has IPv6 exiting, it will connect (as it should) via IPv6. So we should change this to "IPv6 is the default" - as per RFC's recommendations.https://gitlab.torproject.org/tpo/anti-censorship/bridgedb/-/issues/40046add description of settings and telegram distribution mechanisms2022-05-04T11:28:58Zmeskiomeskio@torproject.orgadd description of settings and telegram distribution mechanismshttps://bridges.torproject.org/info is missing a description of the new distribution mechanisms, and the metrics relay search points to it.https://bridges.torproject.org/info is missing a description of the new distribution mechanisms, and the metrics relay search points to it.Sponsor 96: Rapid Expansion of Access to the Uncensored Internet through Tor in China, Hong Kong, & Tibetmeskiomeskio@torproject.orgmeskiomeskio@torproject.orghttps://gitlab.torproject.org/tpo/network-health/metrics/timeline/-/issues/7Automate metrics timeline update2023-03-16T13:28:15ZHiroAutomate metrics timeline updateWe should find a way to automate somehow the metrics pipeline update.
https://pulse.internetsociety.org/shutdowns has an API with events. We could query it once a week and import it into our timeline.
This could be the first external source and we could add more sources as we see fit.
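The weekly import step could look something like the following sketch. The event fields (`id`, `start`, `country`, `title`) are assumptions about the Pulse API's schema and must be checked against its documentation; the HTTP fetch is stubbed out with sample data here.

```python
import csv
import io

# Hypothetical sample of events from the Pulse shutdowns API; the real
# endpoint and field names need to be verified before automating this.
sample_events = [
    {"id": "1042", "start": "2022-03-01", "country": "RU", "title": "Partial shutdown"},
    {"id": "1043", "start": "2022-03-04", "country": "IN", "title": "Mobile data cut"},
]

def new_events(known_ids, events):
    """Keep only events not already present in our timeline."""
    return [e for e in events if e["id"] not in known_ids]

def to_csv(events):
    """Render events as CSV, a format a program can easily parse."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "start", "country", "title"])
    writer.writeheader()
    writer.writerows(events)
    return buf.getvalue()

# Suppose event 1042 is already in the timeline; only 1043 gets appended.
fresh = new_events({"1042"}, sample_events)
print(to_csv(fresh))
```

Deduplicating on a stable event id keeps the weekly cron job idempotent, so re-running it never duplicates timeline entries.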
With this, we should move the current timeline out of the README and into a format that is easy for a computer program to parse.

---

https://gitlab.torproject.org/tpo/web/donate-static/-/issues/69 — Update the donate.tpo page for a mystery grab bag swag campaign (2022-10-17, al smith)

We filed a ticket (#52) to modify the donation page for a mini-campaign during the year-end fundraising season, but, due to capacity all around, had to cancel it.
We'd like to revive this effort to launch some time in the next 6 weeks--if that is possible with what's on @kez's plate.
I'm filing this ticket to:
1. Ask what kind of availability @kez has to both a) make these changes and b) revert these changes when done; and
2. Outline the work that I *believe* will be required to make the campaign happen.
I'm tagging @anarcat to help understand how this can fit into TPA's roadmap, and @eric for visibility of an upcoming project.
## Timeline
- Launch changes sometime within the next 6 weeks? When is this possible?
- Revert 3 weeks later depending on donor interest
## Campaign needs
We will need to update https://donate.torproject.org in order to run a temporary campaign during which folks can donate $125 for a mystery hoodie & mystery t-shirt pack. Because of known limitations, these changes **do not need to change what kind of data is passed to CiviCRM.** Matt will parse who needs a mystery pack based on the timestamps.
- [x] Update swag image at $125 level (@nicob will provide asset)
- [x] Update the title text in the UI in the gift array from "T-Shirt Pack" -> "Mystery Swag Pack"
- [x] **Update the body text in the UI in the gift array from "Get this year's Privacy is a Human Right t-shirt and Use Tor t-shirt." -> "Get two or more pieces of mystery Tor clothing in sizes of your choice."** NEW!!
- [x] Update the text in the UI in "Your Info / Choose your size and fit" section "Privacy is a Human Right" -> "Mystery Item 1"; "Use Tor" -> "Mystery Item 2"
- [x] Hide the "3X" and "4X" options from all drop down boxes
- [x] Hide both "Select your fit" drop down boxes
- [ ] Test that selecting "Mystery Swag Pack" correctly imports donation amount and size selections into Civi
- [x] Launch the changes
- [ ] Revert the changes
## Assets
![site-swag-mystery-array](/uploads/e0f8cd68ed0e909d6b531be4539e9450/site-swag-mystery-array.png)

2022-04-07

---

https://gitlab.torproject.org/tpo/community/support/-/issues/40065 — Delivery of letters from TorProject to Russian email services (2022-03-31, nina)

Email services in Russia can downgrade, or route to spam, emails from certain senders.
Can we periodically check the delivery of emails from TorProject?
This issue is to test if frontdesk@tpo replies are being delivered to popular mail services in Russia.
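A periodic check could start from a sketch like this. The probe addresses are hypothetical (real test accounts on each service would have to be set up), and actually sending would use `smtplib` on a schedule:

```python
from email.message import EmailMessage

# Hypothetical probe addresses; real test accounts on each Russian mail
# service would need to be created by the support team.
PROBE_ADDRESSES = ["tor-delivery-probe@mail.ru", "tor-delivery-probe@yandex.ru"]

def build_probe(sender, recipient):
    """Build a minimal test message whose arrival (inbox vs. spam) we can check."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Tor Project delivery probe"
    msg.set_content("Periodic deliverability check from frontdesk; please ignore.")
    return msg

probes = [build_probe("frontdesk@torproject.org", r) for r in PROBE_ADDRESSES]
# Delivery itself would go through smtplib.SMTP(...).send_message(p) on a
# schedule (e.g. monthly), with a human checking whether each probe landed
# in the inbox or the spam folder.
```

This only automates the sending side; judging "inbox vs. spam" still needs a person (or IMAP access to the test accounts).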
The most popular email services in Russia are Mail.ru and yandex.ru.

Sponsor 125: Rapid Response Fund for Russia censorship circumvention

---

https://gitlab.torproject.org/tpo/community/support/-/issues/40064 — Archiving mailing lists managed by the Community Team (2022-03-23, Gus)

We're archiving these mailing lists managed by the Community Team:
* https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-teachers
* https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-community-team
* https://lists.torproject.org/cgi-bin/mailman/listinfo/global-south
Tasks:
- [ ] Send the email below
- [ ] Change the mailing list info
- [ ] Open a ticket on TPA/Team to archive the mailing list
> Title: Archiving $listname mailing list
>
> Hello Tor Community,
>
> As announced on the [Tor Project Blog](https://blog.torproject.org/tor-forum-a-new-discussion-platform/), we're currently reviewing and deprecating unused and unmaintained mailing lists. Today we're officially deprecating and archiving the [$listname](https://lists.torproject.org/cgi-bin/mailman/listinfo/$listname) mailing list. Users will be able to access and read the mailing list archive, but starting today, all new emails to $listname will be automatically bounced. Tor volunteers, contributors, researchers, developers, and enthusiasts are invited to join the Tor discussions on the new [Tor Forum](https://forum.torproject.net/).
>
> The Tor Forum is a new place for Tor user support, blog comments, UX feedback, and more. All discussions here are covered by the Tor Project [Code of Conduct](https://forum.torproject.net/t/welcome-to-the-tor-project-forum/7). For those who already have an email workflow, the new forum can receive emails from specific categories and reply through a mail client (aka ["mailing list mode"](https://meta.discourse.org/t/what-is-mailing-list-mode/46008)).
>
> It's important to note that popular and functional mailing lists like tor-dev, tor-project, and tor-relays will continue to work normally.
>
> Finally, Tor users can chat with us using Matrix/Element and join [Tor chat channels](https://support.torproject.org/get-in-touch/#irc-help).
>
> Welcome to the Tor Forum!
Related: https://gitlab.torproject.org/tpo/community/support/-/issues/40057

---

https://gitlab.torproject.org/tpo/tpa/team/-/issues/40625 — Clean up lektor-staging.tpo off of static mirrors (2022-02-15, Jérôme Charaoui / lavamind@torproject.org)

@emmapeel confirmed we can get rid of it, as it isn't in use anymore.