Those are the ideas that were brought up in 2020 for 2021:
|
|
|
|
|
* **january: roadmap approval** - still planned
|
|
|
|
* **march/april: anarcat vacation** - up in the air
|
|
|
|
|
|
|
|
|
|
# Survey results
|
|
|
|
|
|
|
|
This roadmap benefits from a user survey sent to `tor-internal@` in
|
|
|
|
December. This section discusses the results of that survey and tries
|
|
|
|
to draw general (qualitative) conclusions from that (quantitative)
|
|
|
|
data.
|
|
|
|
|
|
|
|
## Respondents information
|
|
|
|
|
|
|
|
* **26 responses**: 12 full, 14 partial
|
|
|
|
* **all paid workers**: 9 out of 10 respondents were paid by TPI, the
|
|
|
|
other was paid by another entity to work on Tor
|
|
|
|
* **roles**: of the 16 people who filled the "who are you" section:
|
|
|
|
* programmers: 9 (75%)
|
|
|
|
  * management: 4 (33%; includes one free-form "operations" answer, a category that should probably be added in the next survey)
|
|
|
|
* documentation: 1 (8%)
|
|
|
|
* community: 1 (8%)
|
|
|
|
* "yes": 1 (as in: "yes I participate")
|
|
|
|
* (and yes, those add up to more than 100%, obviously, there is
|
|
|
|
some overlap, but we can note that sysadmins did *not* respond to
|
|
|
|
their own survey)
|
|
|
|
|
|
|
|
The survey should be assumed to represent mostly TPI employees, and
|
|
|
|
not the larger tor-internal or Tor-big-t community.
|
|
|
|
|
|
|
|
## General happiness
|
|
|
|
|
|
|
|
**No one is sad with us**! People are either happy (15, 58% of total,
|
|
|
|
83% responding), exuberant (3, 12%, 17% responding), or didn't answer.
|
|
|
|
|
|
|
|
Of those 18 people, 10 said the situation has improved in the last
|
|
|
|
year (56%) as well.
|
|
|
|
|
|
|
|
## General prioritization
|
|
|
|
|
|
|
|
The priority for 2021 should be, according to the 12 people who
|
|
|
|
answered:
|
|
|
|
|
|
|
|
* Stability: 6 (50%)
|
|
|
|
* New services: 3 (25%)
|
|
|
|
* Remove cruft: 1 (8%)
|
|
|
|
* "Making the interaction between TPA/dev smoother when new services are set up": 1 (8%)
|
|
|
|
* No answer: 1 (8%)
|
|
|
|
|
|
|
|
## Services to add or retire
|
|
|
|
|
|
|
|
People identified the following services as missing:
|
|
|
|
|
|
|
|
* Discord
|
|
|
|
* a full email stack, or at least outbound email
|
|
|
|
* Discourse
|
|
|
|
* development/experimental VMs
|
|
|
|
* a "proper blog platform"
|
|
|
|
* "Continued enhancements to gitlab-lobby"
|
|
|
|
|
|
|
|
The following services had votes for retirement:
|
|
|
|
|
|
|
|
* git-rw (4, 33%)
|
|
|
|
* gitweb (4, 33%)
|
|
|
|
* SVN (3, 25%)
|
|
|
|
* blog (2, 17%)
|
|
|
|
* jenkins (2, 17%)
|
|
|
|
* fpcentral (1, 8%)
|
|
|
|
* schleuder (1, 8%)
|
|
|
|
* testnet (1, 8%)
|
|
|
|
|
|
|
|
## Service usage details and happiness
|
|
|
|
|
|
|
|
This section drills down into each critical service. A critical service here
|
|
|
|
is one that either:
|
|
|
|
|
|
|
|
* has at least one sad vote
|
|
|
|
* has a comment
|
|
|
|
* is used more than "monthly" on average
|
|
|
|
|
|
|
|
We have a lot of services: it's basically impossible to process all of
those in a reasonable time frame, and doing so might not give us much
more information anyway, as far as this roadmap is concerned.
|
|
|
|
|
|
|
|
### Usage graph
|
|
|
|
|
|
|
|
![service-usage-hours](service-usage-hours.png)
|
|
|
|
|
|
|
|
### Happiness graph
|
|
|
|
|
|
|
|
![service-happiness-score](service-happiness-score.png)
|
|
|
|
|
|
|
|
### Key takeaways
|
|
|
|
|
|
|
|
* **GitLab** is a success, people want it expanded to **replace
|
|
|
|
git-rw/gitweb (Git hosting) and Jenkins (CI)**
|
|
|
|
* **email is a major problem**: people want a **Gmail replacement**, or at
|
|
|
|
least a way to deliver email without **being treated as spam**
|
|
|
|
* **CiviCRM is a problem**: it needs to **handle bounces** and we
|
|
|
|
have frustrations with our consultants here
|
|
|
|
* **RT receives a lot of spam** and makes people unhappy
|
|
|
|
* **schleuder is a problem**: tedious to use, unreliable, not sure
|
|
|
|
what the solution is
|
|
|
|
* people are **extremely happy with metrics.tpo**, and **happy with Big
|
|
|
|
Blue Button**
|
|
|
|
* the **main website is a success**, but there are concerns it still
|
|
|
|
**links to the old website**
|
|
|
|
* some people would like to use the **IRC bouncer** but **don't know how**
|
|
|
|
* the **blog is a problem**: **formatting issues and moderation** cause
|
|
|
|
significant pain, people suggest **migrating to Discourse and a
|
|
|
|
static blog**
|
|
|
|
* people want a **v3 onion.tpo** which is planned shortly
|
|
|
|
* **NextCloud** is a success, but the **collaborative edition is not
|
|
|
|
working for key people** who stay on other (proprietary/commercial)
|
|
|
|
services for collaboration. unclear what the solution is here.
|
|
|
|
|
|
|
|
In general, a lot of the problems related to email would benefit from
|
|
|
|
**splitting the email services into multiple servers**, something that
|
|
|
|
was previously discussed but should be prioritized in this year's
|
|
|
|
roadmap. In general, it seems the **delivery service should be put back
|
|
|
|
on the roadmap** this year as well.
|
|
|
|
|
|
|
|
### GitLab
|
|
|
|
|
|
|
|
GitLab is a huge accomplishment. It's the most used service, which is exceptional considering it has been deployed only in the last few months. Out of 11 respondents, everyone uses it at least weekly, and most (6), hourly. So it has already become a critical service!
|
|
|
|
|
|
|
|
Better yet, people are extremely happy with it. Out of those 11 people, everyone but a single soul said they were happy with it, which gives it one of the best happiness scores of all services (rank #5)!
|
|
|
|
|
|
|
|
Most comments about GitLab were basically asking to move more stuff to it (namely git-rw/gitweb and Jenkins); someone even suggested we "force people to migrate to Gitlab". In particular, it seems we should look at retiring Jenkins in 2021: only one (monthly) user, and an unhappy comment suggesting to migrate...
|
|
|
|
|
|
|
|
The one criticism of the service is "too much URL nesting" and that it is hard to find things, since they do not map to the git-rw project hierarchy.
|
|
|
|
|
|
|
|
So GitLab is a win. We need to make sure it keeps running and probably expand it in 2021.
|
|
|
|
|
|
|
|
It should be noted, however, that Gitweb and Gitolite (git-rw), as services, are among the most frequently used (4th and 5th place, respectively) and among those that make people happy (10/10, 3rd place, and 8/8, 9th place), so if/when we replace those services, we should be very careful that the web interface remains useful. One comment that may summarize the situation is:
|
|
|
|
|
|
|
|
> Happy with gitolite and gitweb, but hope they will also be migrated to gitlab.
|
|
|
|
|
|
|
|
### Email and lists
|
|
|
|
|
|
|
|
Email services are pretty popular: email and lists come second and third, right after GitLab! People are unanimously happy with the mailing lists service (which may be surprising), but the happiness degrades severely when we talk about "email" in general. Most people (5 out of 7 respondents) are "sad" about the email service.
|
|
|
|
|
|
|
|
Comments about email are:
|
|
|
|
|
|
|
|
* "I don’t know enough to get away from Gmail"
|
|
|
|
* "Majority of my emails sent from my @tpo ends up in SPAM"
|
|
|
|
* "would like to have outgoing DKIM email someday"
|
|
|
|
|
|
|
|
So "fixing email" should probably be the top priority for 2021. In particular, we should be better at not ending up in spam filters (which is hard), provide an alternative to Gmail (maybe less hard), or at least document alternatives to Gmail (not hard).
|
|
|
|
|
|
|
|
### RT
|
|
|
|
|
|
|
|
While we're talking about email, let's talk about Request Tracker, a lesser-known service (only 4 people use it, and 4 declared never using it), yet intensively used by those people (one person uses it hourly!), so it deserves special attention. Most of its users (3 out of 5) are unhappy with it. The concerns are:
|
|
|
|
|
|
|
|
* "Some automated ticket handling or some other way to manage the
|
|
|
|
high level of bounce emails / tickets that go to donations@ would
|
|
|
|
make my sadness go away"
|
|
|
|
* "Spam": presumably receiving too much spam in the queues
|
|
|
|
|
|
|
|
### CiviCRM
|
|
|
|
|
|
|
|
Let's jump the queue a little (we'll come back to BBB and IRC below) and talk about the 9th most used service: CiviCRM. This is one of those services that are used by few of our staff, but intensively (one person uses it hourly). And considering how important its service is (donations!), it probably deserves to be higher priority. 2 people responded on the happiness scale: strangely, one happy and one unhappy.
|
|
|
|
|
|
|
|
A good summary of the situation is:
|
|
|
|
|
|
|
|
> The situation with Civi, and our donate.tpo portal, is a grand source of sadness for me (and honestly, our donors), but I think this issue lies more with the fact that the control of this system and architecture has largely been with Giant Rabbit and it’s been like pulling teeth to make changes. Civi is a fairly powerful tool that has a lot of potential, and I think moving away from GR control will make a big difference.
|
|
|
|
|
|
|
|
In general, it seems the spam, bounce handling and email delivery issues mentioned in the email section apply here as well. Migrating CiviCRM to handle its own bounces and deliver its own emails will help delivery for other services, reduce abuse complaints, make CiviCRM work better, and generally improve everyone's life, so it should definitely be prioritized.
|
|
|
|
|
|
|
|
### Big Blue Button
|
|
|
|
|
|
|
|
One of those services intensively used by many people (rank #7): 10 people use it, 2 monthly, 3 weekly and 5 daily! It's also one of the most "happy" services: 10 people responded they were happy with the service, which makes it the second-happiest service!
|
|
|
|
|
|
|
|
No negative comments, great idea, great deployment, nothing to fix here, it seems.
|
|
|
|
|
|
|
|
### IRC
|
|
|
|
|
|
|
|
The next service in popularity is IRC (rank #8), used by 3 people (hourly, weekly and monthly, somewhat strangely). The main comment was about the lack of usability:
|
|
|
|
|
|
|
|
> IRC Bouncer: I’d like to use it! I don’t know how to get started, and I am sure there is documentation somewhere, but I just haven’t made time for it and now it’s two years+ in my Tor time and I haven’t done it yet.
|
|
|
|
|
|
|
|
I'll probably just connect that person with the IRC bouncer maintainer and pretend there is nothing else to fix here. I honestly expected someone to ask us to set up a Matrix server (and someone *did* suggest setting up a "Discord" server, so that might be it), but it didn't get explicitly mentioned, so it's not a priority, even if IRC itself is heavily used.
|
|
|
|
|
|
|
|
### Main website
|
|
|
|
|
|
|
|
The new website is a great success. It's the 7th most used service according to our metrics, and also one that makes people the happiest (7th place).
|
|
|
|
|
|
|
|
The single negative comment on the website was "transition still not complete: links to old site still prominent (e.g. Documentation at the top)".
|
|
|
|
|
|
|
|
Maybe we should make sure more resources are transitioned to the new website (or elsewhere) in 2021.
|
|
|
|
|
|
|
|
### Metrics
|
|
|
|
|
|
|
|
The metrics.torproject.org site is the service that makes people the happiest, in all the services surveyed. Of the 11 people that answered, *all* of them were happy with it. It's one of the most used services all around, at place #4.
|
|
|
|
|
|
|
|
### Blog
|
|
|
|
|
|
|
|
People are pretty frustrated by the blog. Of **all** the people that answered the "happiness" question, **all** said they were "sad" about the service. In the freeform comments, people mentioned:
|
|
|
|
|
|
|
|
* "comment formatting still not fixed", "never renders properly"
|
|
|
|
* [needs something to] produce link previews (in a privacy preserving way)
|
|
|
|
* "The comment situation is totally unsustainable but I feel like that’s a community decision vs. sysadmin thing", "comments are awful", "Comments can get out of hand and it's difficult to have productive conversations there"
|
|
|
|
* "not intuitive, difficult to follow"
|
|
|
|
* "difficult to find past blog posts[...]: no [faceted search or sort by date vs relevance]"
|
|
|
|
|
|
|
|
A positive comment:
|
|
|
|
|
|
|
|
* "I like Drupal and it’s easy to use for me"
|
|
|
|
|
|
|
|
A good summary has been provided: "drupal: everyone is unhappy with the solution right now: hard to do moderation, etc. Static blog + Discourse would be better."
|
|
|
|
|
|
|
|
I single out the blog because it's one of the most frequently used services, yet one of the "saddest", so it should probably be made a priority in 2021.
|
|
|
|
|
|
|
|
### NextCloud
|
|
|
|
|
|
|
|
People are generally (77% of 9 respondents) happy with this popular service (rank 14, used by 9 people, 1 yearly, 2 monthly, 4 weekly, 2 daily).
|
|
|
|
|
|
|
|
Pain points:
|
|
|
|
|
|
|
|
* discovery problems:
|
|
|
|
> Discovering what documents there are is not easy; I wish I had a view of some kind of global directory structure. I can follow links onto nextcloud, but I never ever browse to see what's there, or find anything there on my own.
|
|
|
|
* shared documents are too unreliable:
|
|
|
|
> I want to love NextCloud because I understand the many benefits, but oh boy, it’s a problem for me, particularly in shared documents. I constantly lose edits, so I do not and cannot rely on NextCloud to write anything more serious than meeting notes. Shared documents take 3-5 minutes to load over Tor, and 2+ minutes to load outside of Tor. The flow is so clunky that I just can’t use it regularly other than for document storage.
|
|
|
|
|
|
|
|
> I've ran into sync issues with a lot of users using the same pad at once. These forced us to not use nextcloud for collab in my team except when really necessary.
|
|
|
|
|
|
|
|
So overall Nextcloud is heavily used, but has serious reliability problems that keep it from properly replacing Google Docs for collaboration. It is unclear which way forward we can take here without getting involved in hosting the service ourselves or in upstream development, neither of which is likely to be an option for 2021.
|
|
|
|
|
|
|
|
### onion.tpo
|
|
|
|
|
|
|
|
A moderately popular service (rank 26), mentioned here because two people were unhappy with it: it "seems not maintained" and "would love to have v3 onions, I know the reason we don't have yet, but still, this should be a priority".
|
|
|
|
|
|
|
|
Thankfully, the latter is a priority that was originally aimed at 2020 and should be delivered in 2021 for sure. It's unclear what to do about the other concern.
|
|
|
|
|
|
|
|
### Schleuder
|
|
|
|
|
|
|
|
3 people responded on the happiness scale, and all were sad. Those three (presumably) use the service yearly, monthly and weekly, respectively, so it's not as important (27th service in popularity) as the blog (3rd service!), yet I mention it here because of the severity of the unhappiness.
|
|
|
|
|
|
|
|
Comments were:
|
|
|
|
|
|
|
|
* "breaks regularly and tedious to update keys, add or remove people"
|
|
|
|
* "GPG is awful and I wish we could get rid of it"
|
|
|
|
* "tracking who has responded and who hasn't (and how to respond!) is nontrivial"
|
|
|
|
* "applies encryption to unencrypted messages, which have already gone over the wire in the clear. This results in a huge amount of spam in my inbox"
|
|
|
|
|
|
|
|
In general, considering no one is happy with the service, we should consider looking for alternatives, plain retirement, or really fixing those issues. Maybe making it part of a "big email split" where the service runs on a different server (with service admins having more access) would help?
|
|
|
|
|
|
|
|
### Ignored services
|
|
|
|
|
|
|
|
I stopped looking at services below the 500 hours threshold or so (technically: after the first 20 services, which puts the mark at 350 hours). I made an exception for any service with a "sad" comment.
|
|
|
|
|
|
|
|
The following services were above the defined thresholds but were not
discussed above:
|
|
|
|
|
|
|
|
* DNS: one person uses it "hourly", and is "happy", nothing to change
|
|
|
|
* Community portal: largely used, users happy, no change suggested
|
|
|
|
* consensus-health: same
|
|
|
|
* support portal and tb manual: generally happy, well used, except "FAQ answers don't go into *why* enough and only regurgitate the surface-level advice. Moar links to support claims made" - should be communicated to the support team
|
|
|
|
* debian package repository: "debian package not usable", otherwise people are happy
|
|
|
|
* someone was unhappy about backups, but did not seem to state why
|
|
|
|
* research: very little use, comment: "whenever I need to upload something to research.tpo, it seems like I need to investigate how to do so all over again. This is probably my fault for not remembering? "
|
|
|
|
* media: people are unhappy about it: "it would be nice to have something better than what we have now, which is an old archive" and "unmaintained", but it's unclear how to move forward on this from TPA's perspective
|
|
|
|
* fpcentral: one yearly user, one unhappy person suggested to retire it, which is already planned (https://gitlab.torproject.org/tpo/tpa/team/-/issues/40009)
|
|
|
|
|
|
|
|
Every other service not mentioned here should consider itself "happy". In particular, people are generally happy with websites, TPA and metrics services overall, so congratulations to every sysadmin and service admin out there and thanks for your feedback for those who filled in the survey!
|
|
|
|
|
|
|
|
## Notes for the next survey
|
|
|
|
|
|
|
|
* **average time: 16 minutes** (median: 14 min). much longer
|
|
|
|
than the estimated 5-10 minutes.
|
|
|
|
* unsurprisingly, the **biggest time drain was the service group**,
|
|
|
|
taking between 10 and 20 minutes
|
|
|
|
* maybe remove or merge some services next time?
|
|
|
|
* remove the "never" option for the service? same as not answering...
|
|
|
|
* the **service group responses are hard to parse** - each *option*
|
|
|
|
ends up being a separate *question* and required a lot more
|
|
|
|
processing than can just be done directly in limesurvey
|
|
|
|
* worse: the **data is mangled** up together: the "happiness" and
|
|
|
|
"frequency" data is interleaved which required some annoying data
|
|
|
|
massaging after - might be better to split those in two next time?
|
|
|
|
* consider an automated Python script to extract the data from the
  survey next time? processing took about 8 hours this time around
  (keeping [xkcd 1205](https://xkcd.com/1205/) in mind, of course)
|
|
|
|
* everyone who answered that question (8 out of 12, 67%) **agreed to do
|
|
|
|
the survey again next year**
|
|
|
|
|
|
|
|
Obviously, at least one person correctly identified that the "survey
|
|
|
|
could use some work to make it less overwhelming." Unfortunately, no
|
|
|
|
concrete suggestion on how to do so was provided.
|
|
|
|
|
|
|
|
### How the survey data was processed
|
|
|
|
|
|
|
|
Most of the questions were analyzed directly in Limesurvey by:
|
|
|
|
|
|
|
|
1. [visiting the admin page](https://survey.torproject.org/index.php/admin/survey/sa/view/surveyid/771333), then [responses and statistics](https://survey.torproject.org/index.php/admin/responses/sa/index/surveyid/771333),
|
|
|
|
then the [statistics page](https://survey.torproject.org/index.php/admin/statistics/sa/index/surveyid/771333)
|
|
|
|
2. in the stats page, check the following:
|
|
|
|
* Data selection: Include "all responses"
|
|
|
|
* Output options:
|
|
|
|
* Show graphs
|
|
|
|
* Graph labels: Both
|
|
|
|
* In the "Response filters", pick everything but the "Services
|
|
|
|
satisfaction and usage" group
|
|
|
|
3. click "View statistics" on top
|
|
|
|
|
|
|
|
Then we went through the results and described those manually here. We
|
|
|
|
could also have exported a PDF but it seemed better to have a
|
|
|
|
narrative.
|
|
|
|
|
|
|
|
The "Services satisfaction and usage" group required more work. The
above "statistics" page remains useful to verify things (and to
access the critical comments section!): just select that group,
grouped in one column for easier display. The data itself, however,
was exported as CSV with the following procedure:
|
|
|
|
|
|
|
|
1. in [responses and statistics](https://survey.torproject.org/index.php/admin/responses/sa/index/surveyid/771333) again, pick Export -> [Export
|
|
|
|
responses](https://survey.torproject.org/index.php/admin/export/sa/exportresults/surveyid/771333)
|
|
|
|
2. check the following:
|
|
|
|
* Headings:
|
|
|
|
* Export questions as: **Question code**
|
|
|
|
* Responses:
|
|
|
|
* Export answers as: **Answer codes**
|
|
|
|
   * Columns:
|
|
|
|
     * Select columns: use shift-click to select the right question
       set
|
|
|
|
3. then click "export"
|
|
|
|
|
|
|
|
The resulting CSV file was imported in LibreOffice and mangled with a
|
|
|
|
bunch of formulas and graphs. Originally, I used this logic:
|
|
|
|
|
|
|
|
* for the happy/sad questions, I assigned one point to "Happy" answers and -1 points to "Sad" answers.
|
|
|
|
* for the usage, I followed the question codes:
|
|
|
|
* A1: never
|
|
|
|
* A2: Yearly
|
|
|
|
* A3: Monthly
|
|
|
|
* A4: Weekly
|
|
|
|
* A5: Daily
|
|
|
|
* A6: Hourly
|
|
|
|
|
|
|
|
For usage the idea is that a service still gets a point if someone
|
|
|
|
answered "never" instead of just skipping it. It shows acknowledgement
|
|
|
|
of the service's existence, in some way, and is better than not
|
|
|
|
answering at all, but not as good as "once a year", obviously.
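The original scheme can be sketched in a few lines of Python. Note the assumptions: the exported answer strings ("Happy"/"Sad") are hypothetical, and the "each code An is worth n points" mapping is inferred from the description above (it fits the idea that "never" still earns a point), not confirmed by the source.

```python
# Sketch of the original scoring. Assumptions: happy/sad answers are exported
# as the strings "Happy"/"Sad", and each frequency code "An" is worth n points
# (so "never" = A1 still earns one point of recognition).

def happiness_score(answers):
    """+1 per "Happy" answer, -1 per "Sad" answer; blanks are skipped."""
    return sum(1 if a == "Happy" else -1
               for a in answers if a in ("Happy", "Sad"))

def usage_score(codes):
    """Sum the numeric part of each answer code (A1..A6); blanks are skipped."""
    return sum(int(c[1:]) for c in codes if c)

# e.g. a service with three happy votes, one sad vote and one blank:
happiness_score(["Happy", "Happy", "Sad", "Happy", ""])  # 2
# one hourly (A6) and one weekly (A4) user:
usage_score(["A6", "A4", ""])  # 10
```
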
|
|
|
|
|
|
|
|
I changed the way values are computed for the frequency scores. The
|
|
|
|
above numbers are quite meaningless: GitLab was at "60" which could
|
|
|
|
mean 10 people using it hourly *or* 20 people using it weekly, which
|
|
|
|
is a vastly different usage scenario.
|
|
|
|
|
|
|
|
Instead, i've come up with a magic formula: `H = 10*5^(A-3)` (let's see if katex can render this properly:
|
|
|
|
|
|
|
|
```math
|
|
|
|
H = 10*5^(A-3)
|
|
|
|
```
|
|
|
|
|
|
|
|
looks like no: that `(A-3)` should be superscript. anyways.)
|
|
|
|
|
|
|
|
This gives us the following values, which somewhat fit a number of
|
|
|
|
hours a year for the given frequency:
|
|
|
|
|
|
|
|
* A1 ("never"): 0.4
|
|
|
|
* A2 ("yearly"): 2
|
|
|
|
* A3 ("monthly"): 10
|
|
|
|
* A4 ("weekly"): 50
|
|
|
|
* A5 ("daily"): 250
|
|
|
|
* A6 ("hourly"): 1250
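As a quick sanity check, the table above follows directly from the formula; a minimal sketch, using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

def weight(code):
    """Frequency weight H = 10 * 5^(A-3) for an answer code like "A4"."""
    a = int(code[1:])
    return 10 * Fraction(5) ** (a - 3)

# reproduces the table above:
{c: float(weight(c)) for c in ["A1", "A2", "A3", "A4", "A5", "A6"]}
# {'A1': 0.4, 'A2': 2.0, 'A3': 10.0, 'A4': 50.0, 'A5': 250.0, 'A6': 1250.0}
```
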
|
|
|
|
|
|
|
|
Obviously, there are more than 250 days and 1250 hours in a year, but
|
|
|
|
if you count for holidays and lost cycles, and squint a little, it
|
|
|
|
kind of works. Also, "Never" should probably be renamed to "rarely" or
|
|
|
|
just removed in the next survey, but it still reflects the original
|
|
|
|
idea of giving credit to the "recognition" of the service.
|
|
|
|
|
|
|
|
This gives us a much better approximation of the number of
person-hours each service is used per year, and therefore of which
services should be prioritized. I also believe it better reflects
actual use: I was surprised that the previous calculation told us
gitweb and git-rw are used equally by the team. The new scores seem to
better reflect actual use (3 monthly, 1 weekly, 6 daily vs 1 monthly,
2 weekly, 3 daily, 2 hourly, respectively).
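That comparison can be reproduced from the distributions quoted above. For the old scheme, "code An is worth n points" is again an assumption; the new scheme is the `H = 10*5^(A-3)` formula.

```python
# Old vs new frequency scores for gitweb and git-rw, using the response
# distributions quoted above (gitweb: 3 monthly, 1 weekly, 6 daily;
# git-rw: 1 monthly, 2 weekly, 3 daily, 2 hourly).
CODE = {"monthly": 3, "weekly": 4, "daily": 5, "hourly": 6}  # answer-code index

def old_score(counts):
    """Assumed original scheme: code An is worth n points."""
    return sum(n * CODE[f] for f, n in counts.items())

def new_score(counts):
    """New scheme: H = 10 * 5^(A-3) "hours" per respondent."""
    return sum(n * 10 * 5 ** (CODE[f] - 3) for f, n in counts.items())

gitweb = {"monthly": 3, "weekly": 1, "daily": 6}
git_rw = {"monthly": 1, "weekly": 2, "daily": 3, "hourly": 2}

old_score(gitweb), old_score(git_rw)  # (43, 38): nearly equal
new_score(gitweb), new_score(git_rw)  # (1580, 3360): clearly different
```

Under the old scheme the two services do look "used equally" (43 vs 38 points), while the new weighting spreads them well apart.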