The Tor Project issues · https://gitlab.torproject.org/groups/tpo/-/issues · 2024-03-25T11:39:56Z

# Better coarsetime strategy needed
https://gitlab.torproject.org/tpo/core/arti/-/issues/496 · Ian Jackson (iwj@torproject.org) · updated 2024-03-25 · milestone: Arti 1.0.0: Ready for production use

I was writing some channel padding computations and missed `.checked*` functions on the types in `coarsetime`. I investigated a bit and discovered that the crate is full of unchecked arithmetic on `u64`. I think these are silently wrapping overflows in release builds, and panics in debug builds. There are none of the customary `Panics` sections in the docs.
Also, there are no impls of the obvious conversions to or from `std::time`.
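For contrast, here is the behavior we would want, shown with `std::time::Duration` (which does provide checked arithmetic); `padding_deadline` is a hypothetical helper for illustration, not an arti function:

```rust
use std::time::Duration;

// Checked arithmetic reports overflow as None instead of silently
// wrapping (release builds) or panicking (debug builds).
fn padding_deadline(base: Duration, extra: Duration) -> Option<Duration> {
    base.checked_add(extra) // None on overflow
}

fn main() {
    assert_eq!(
        padding_deadline(Duration::from_secs(1), Duration::from_secs(2)),
        Some(Duration::from_secs(3))
    );
    // Overflow is reported rather than wrapped.
    assert_eq!(padding_deadline(Duration::MAX, Duration::from_secs(1)), None);
}
```

The missing `From`/`TryFrom` impls to and from `std::time` types would let callers fall back to these checked `std` operations when `coarsetime` precision isn't needed.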
I don't consider myself impressed. We should decide what to do about this.

# Use of `nf_conntimeout_clients` seems incorrect
https://gitlab.torproject.org/tpo/core/arti/-/issues/481 · Nick Mathewson · updated 2023-06-13 · assignee: Nick Mathewson · milestone: Arti 1.0.0: Ready for production use

Inspecting C Tor and `padding-spec`, it appears that we're setting `unused_client_circ_timeout_while_learning_cbt` incorrectly. Right now it's set to `nf_conntimeout_clients`, which isn't supposed to be used for that.

# async cancellation hazards - consider select_safe! or other countermeasures
https://gitlab.torproject.org/tpo/core/arti/-/issues/479 · Ian Jackson (iwj@torproject.org) · updated 2022-08-11 · assignee: Ian Jackson · milestone: Arti 1.0.0: Ready for production use

As
[discussed](https://docs.rs/tokio/1.18.2/tokio/macro.select.html#cancellation-safety)
[in various](https://www.chiark.greenend.org.uk/~ianmdlvl/rust-polyglot/async.html#cancellation-safety)
[places](https://gist.github.com/Matthias247/ffc0f189742abf6aa41a226fe07398a8#cancellation-in-async-rust)
there is a hazard with futures, particularly with `select!` (or constructs and combinators with similar semantics).
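To make the hazard concrete, here is a std-only sketch (a hypothetical `LineAssembler` type, no executor or tokio involved) of the state-loss pattern underneath cancellation-unsafety: a poll-like `feed` step buffers a partial item internally, and dropping the value between calls — which is exactly what happens to a future whose `select!` arm loses the race — silently discards the buffered state:

```rust
/// Plays the role of a read-future that buffers a partially received item.
#[derive(Default)]
struct LineAssembler {
    partial: String,
}

impl LineAssembler {
    /// Feed some bytes; return a line if one is now complete.
    fn feed(&mut self, chunk: &str) -> Option<String> {
        self.partial.push_str(chunk);
        if let Some(i) = self.partial.find('\n') {
            let rest = self.partial.split_off(i + 1);
            let mut line = std::mem::replace(&mut self.partial, rest);
            line.pop(); // drop the trailing '\n'
            Some(line)
        } else {
            None // a partial line is now buffered inside self
        }
    }
}

fn main() {
    let mut a = LineAssembler::default();
    assert_eq!(a.feed("hel"), None); // "hel" is buffered internally
    // If `a` were dropped here -- a cancelled future -- "hel" would vanish.
    assert_eq!(a.feed("lo\n"), Some("hello".to_string()));
}
```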
It would be good to do something to try to reduce the risk of us writing those bugs. Sadly, this is an open research problem in upstream Rust. However, as part of the discussion surrounding !514, we had some ideas.
The most promising proposal was to introduce a `select_safe!` macro which would wrap `select!` and arrange for the arms' body blocks to be unable to `await`. This wouldn't be perfect, but it would perhaps force cancellation-unsafety bugs to be written in an unnatural way that would be spotted during code review. There are some difficulties with this, notably that implementing it using closures would probably break `?` type inference unless the macro were told the surrounding error type, and that `select!` has a complex argument syntax, much of whose parsing we might have to reimplement.

# Scripted tests for bootstrap and other failures
https://gitlab.torproject.org/tpo/core/arti/-/issues/478 · Nick Mathewson · updated 2022-05-24 · milestone: Arti 1.0.0: Ready for production use

With #329 I used `arti-testing` to get a variety of measurements for how arti behaves under different network failure conditions.
That's well and good, but it would be nice to have some kind of script to automate taking those measurements, so that we can re-run them in the future. They took a long time, so I'm not sure that this should become part of CI.

# Current DirMgrConfig makes every new config section a breaking change
https://gitlab.torproject.org/tpo/core/arti/-/issues/470 · Nick Mathewson · updated 2022-07-07 · milestone: Arti 1.0.0: Ready for production use

Due to the current structure of `DirMgrConfig`, any time I want to add a new configuration section to `DirMgrConfig`, this causes a breaking change. That seems undesirable: usually, adding new features should be possible without a breaking API change. We have managed to avoid this problem with our config system as a whole.
As I recall, the rationale for structuring `DirMgrConfig` in this way was to make absolutely certain that we couldn't introduce a new configuration section in `DirMgrConfig` while forgetting to add it to `TorConfig`. Any new solution should try to retain that property.
cc @Diziet

# Test Windows behavior on CI, maybe with Wine
https://gitlab.torproject.org/tpo/core/arti/-/issues/450 · Nick Mathewson · updated 2024-03-04 · milestone: Arti 1.0.0: Ready for production use

As we start to get platform-specific functionality (e.g., file permission checking), it would be a good idea to make sure that our tests pass on Windows.
I'd like it if the CI had a nightly task to run our tests, either on a real Windows VM or under Wine. The build could be native or cross-compiled.

# syslog logging support
https://gitlab.torproject.org/tpo/core/arti/-/issues/444 · Ian Jackson (iwj@torproject.org) · updated 2023-04-25 · assignee: Ian Jackson · milestone: Arti 1.0.0: Ready for production use

We should support logging via syslog. Prompted by #362, but IMO this is self-contained in implementation, so it makes sense to track it separately.

# Reconfiguration, particularly socks and dns ports
https://gitlab.torproject.org/tpo/core/arti/-/issues/443 · Ian Jackson (iwj@torproject.org) · updated 2022-10-20 · assignee: Ian Jackson · milestone: Arti 1.0.0: Ready for production use

We should be able to add and remove socks and dns listeners. And the existing reconfiguration code's approach to error handling and reporting is not very principled.
!440 is a stab at this, but it is currently postponed.

# Discard consensus if certificates can't be found.
https://gitlab.torproject.org/tpo/core/arti/-/issues/440 · Nick Mathewson · updated 2022-10-06 · assignee: Nick Mathewson · milestone: Arti 1.0.0: Ready for production use

If we try many times to find the certificates for a given consensus, and everybody tells us 404, then it's possible we've been lied to about the signing keys. But currently arti will just keep looking for the certificates forever.
Two answers here are:
* remember where we got the consensus from, and ask that same source for the certificates we don't have. Mark it as very naughty if it can't tell us, and drop the consensus.
* if we try a lot of times to get the certificates for a consensus, and everybody tells us "no cert with that signing key", then discard the consensus or try to get a new one.

# Early rejection for consensus documents we would never accept as timely
https://gitlab.torproject.org/tpo/core/arti/-/issues/436 · Nick Mathewson · updated 2022-12-25 · milestone: Arti 1.0.0: Ready for production use

If we are served a consensus document which we believe to be untimely, we won't accept it. But we'll read the whole dang thing anyway, even though the part of it declaring its lifespan is right near the beginning.
We could save wasted bandwidth by inspecting our consensus downloads early on, and aborting them with an error if, based on the first 1-4k, it looks like the prefix of an expired or not-yet-valid consensus.
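As a concrete illustration, here is a hedged sketch of such a prefix check (`prefix_looks_expired` is a hypothetical helper, not the real tor-netdoc or tor-dirclient API). A consensus declares its lifespan in `valid-after`/`fresh-until`/`valid-until` lines near the top, and the `YYYY-MM-DD HH:MM:SS` format sorts the same way the timestamps do, so plain string comparison suffices:

```rust
/// Scan a download prefix for a "valid-until" line and decide whether the
/// consensus is already expired relative to `now` (rendered in the same
/// "YYYY-MM-DD HH:MM:SS" format, which compares correctly as a string).
fn prefix_looks_expired(prefix: &str, now: &str) -> bool {
    prefix
        .lines()
        .find_map(|line| line.strip_prefix("valid-until "))
        .map(|ts| ts < now)
        .unwrap_or(false) // lifespan not in this prefix yet: can't decide
}

fn main() {
    let prefix = "network-status-version 3\nvalid-after 2022-01-01 00:00:00\n\
                  fresh-until 2022-01-01 01:00:00\nvalid-until 2022-01-01 03:00:00\n";
    assert!(prefix_looks_expired(prefix, "2022-06-01 00:00:00"));
    assert!(!prefix_looks_expired(prefix, "2022-01-01 02:00:00"));
}
```

A symmetric check against `valid-after` would catch not-yet-valid documents the same way.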
~~Alternatively, if we see that the directory we're talking to is skewed by more than a certain amount, we might simply refuse to send it any consensus request at all, on the theory that any consensus it would accept, we wouldn't.~~ (This alternative is now #466)
Related to #329.

# Smarter timing for dirmgr downloads and retries
https://gitlab.torproject.org/tpo/core/arti/-/issues/433 · Nick Mathewson · updated 2023-02-27 · milestone: Arti 1.0.0: Ready for production use

Currently there are multiple `RetryDelay` timers and other timers used in `dirmgr`. We should document them better, and simplify how they work. I was thinking of doing this as part of #329, but @eta may get to it first as part of #90.
Here's how it _should_ work:
So, there are two main things happening in `dirmgr`: we try to fetch a complete directory _now_, and once that directory is old, we try to fetch a new one.
## Fetching a directory
For background: a directory consists of multiple documents: A consensus document, a set of certificates that sign the consensus, and a set of microdescriptors whose digests are listed in the consensus. We download them in that order.
To fetch a new directory, we _currently_ do (approximately) this algorithm:
```
initialize RetryDelay "outer" and RetryDelay "inner".
while directory is incomplete or too old:
  * Figure out what documents are missing.
  * Try to download them.
  * On success, try to validate and store them.
  * If that succeeded, and we learned something from it: continue.
  * If the download or validation failed, or if we learned nothing:
    * If we have failed too many times since we last "reset" the directory,
      "reset" the directory (set it to empty), reset "inner", and wait for
      the next delay from "outer".
    * Otherwise, wait for the next delay from "inner".
The directory is complete; declare victory.
```
There are a few problems with that algorithm. First, it is too happy to reset. The only case in which we should ever consider resetting the directory is when we have a consensus, but we can't find the certificates to authenticate it. (This case probably means that the consensus was never valid to begin with, and the certificates mentioned don't exist, so we should get a new consensus.)
The second problem is that when things are going wrong, it is too happy to reset its `RetryDelay` objects, and so it doesn't back off correctly.
Here is a better algorithm:
```
Initialize a RetryDelay.
Initialize n_failures to 0.
while the directory is incomplete or too old:
  * Figure out what documents are missing.
  * Try to download them.
  * On success, try to validate and store them.
  * If that succeeded, and we learned something from it:
    * Set "n_failures" to 0, and continue.
  * If the download or validation failed, or if we learned nothing:
    * Increment n_failures.
    * If n_failures is above some threshold, and the current state is
      resettable♮, reset the directory (set it to empty).
    * Wait for the next delay from our RetryDelay.
The directory is complete; declare victory.
♮ The "downloading certs" state is resettable; other states are not.
```
This change would mostly involve refactoring the function `bootstrap::download()`.
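The improved loop can be simulated with a minimal, synchronous sketch (stand-in types and scripted outcomes; not the real `bootstrap::download()` or `RetryDelay`):

```rust
#[derive(Clone, Copy)]
enum Attempt {
    Progress,   // download + validation succeeded and taught us something
    NoProgress, // download/validation failed, or we learned nothing
}

/// Run the improved loop against a scripted sequence of attempt outcomes.
/// Returns (resets_performed, delays_waited).
fn run_retry_loop(outcomes: &[Attempt], threshold: u32, resettable: bool) -> (u32, u32) {
    let mut n_failures = 0u32;
    let mut resets = 0u32;
    let mut waits = 0u32;
    for &outcome in outcomes {
        match outcome {
            // Progress clears the failure counter; no backoff needed.
            Attempt::Progress => n_failures = 0,
            Attempt::NoProgress => {
                n_failures += 1;
                if n_failures > threshold && resettable {
                    resets += 1; // "reset" the directory (set it to empty)
                    n_failures = 0;
                }
                waits += 1; // always wait for the next delay from our RetryDelay
            }
        }
    }
    (resets, waits)
}

fn main() {
    use Attempt::*;
    // Three failures in a row, threshold 2, resettable state: one reset, three waits.
    assert_eq!(run_retry_loop(&[NoProgress, NoProgress, NoProgress], 2, true), (1, 3));
    // A success in between clears the failure counter, so no reset happens.
    assert_eq!(run_retry_loop(&[NoProgress, Progress, NoProgress, NoProgress], 2, true), (0, 3));
}
```

Note that, unlike the current algorithm, the single `RetryDelay` is never re-initialized on failure, so backoff keeps growing while things go wrong.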
## Waiting for the next directory
Every consensus document has a "lifetime", defined using three times: "Valid-After", "Fresh-Until", and "Valid-Until". The idea is that the consensus can be used safely at all times from "Valid-After" through "Valid-Until", and that you shouldn't even think of replacing it until after "Fresh-Until".
Whenever we have a complete directory, we wait until a randomly chosen time between "Fresh-Until" and "Valid-Until" before we start a new download attempt. (The time is randomly chosen to avoid a "thundering herd" of clients all trying to download at once.)
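The thundering-herd avoidance amounts to this small calculation (a sketch with a hypothetical helper; the random fraction is passed in so the function stays deterministic to test, where real code would draw it from an RNG):

```rust
use std::time::{Duration, SystemTime};

/// Choose the wallclock time at which to start refetching the consensus:
/// uniformly between Fresh-Until and Valid-Until. `frac` is a random
/// value in [0, 1).
fn pick_refetch_time(fresh_until: SystemTime, valid_until: SystemTime, frac: f64) -> SystemTime {
    let lifetime = valid_until
        .duration_since(fresh_until)
        .unwrap_or(Duration::ZERO); // degenerate lifetime: refetch right away
    fresh_until + lifetime.mul_f64(frac)
}

fn main() {
    use std::time::UNIX_EPOCH;
    let fresh = UNIX_EPOCH + Duration::from_secs(1_000);
    let valid = UNIX_EPOCH + Duration::from_secs(2_000);
    // Halfway through the Fresh-Until..Valid-Until window.
    assert_eq!(pick_refetch_time(fresh, valid, 0.5), UNIX_EPOCH + Duration::from_secs(1_500));
}
```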
When we have an incomplete directory, we should re-start trying to download a new directory if the consensus we have is one that we would want to replace anyway.
One consideration here: with these times, we're waiting until wallclock times, not until `Instant`s. That means we need to be prepared for the possibility of a clock jump. (For example, if somebody resets their clock, or the computer sleeps and wakes up). We currently use `sleep_until_wallclock` for that; you could probably implement something similar with `TaskHandle`.
----
And there you have it; it's not too complicated, but it is a bit hairy. I think it ought to be feasible to do this with the `TaskHandle` API.

# Mechanism to warn loudly when running with dubious features enabled
https://gitlab.torproject.org/tpo/core/arti/-/issues/426 · Nick Mathewson · updated 2022-05-23 · milestone: Arti 1.0.0: Ready for production use

As discussed on !442, some of the features on our crates are for testing only: they shouldn't be used for production code at all.
We should provide some mechanism to give a loud warning if the user is running a build made with any of those features. This mechanism probably needs to exist at the crate level, or no higher than the `arti-client` crate.

# netdoc: Do _something_ with ConsensusVoterInfo.
https://gitlab.torproject.org/tpo/core/arti/-/issues/424 · Nick Mathewson · updated 2022-12-15 · milestone: Arti 1.0.0: Ready for production use

The `ConsensusVoterInfo` structs inside a consensus document capture information about who the voters believe the voters are. Right now, the code does nothing with these structs except for parsing them.
We should probably have some kind of accessors for these structs, so that they can be inspected and/or cloned into new documents.
Conceivably, we should alert the user if the list of authorities in the consensus differs from the list that the software recognizes.
Conceivably, we should somehow check the list of signatures on the consensus against the list of voters. (This sounds like it would be a ~Security issue, but I think it isn't: if the authorities we believe in have signed a document that the client shouldn't believe, that's outside our threat model.)

# Review module structure, API, and docs, of arti library crate
https://gitlab.torproject.org/tpo/core/arti/-/issues/419 · Ian Jackson (iwj@torproject.org) · updated 2022-05-23 · milestone: Arti 1.0.0: Ready for production use

Now that the arti command line tool's configuration is moving to the `arti` crate (#285, #375, !421), it is sharing a library crate API with some ad-hoc functions that were made pub when the `arti` library crate was created.
The resulting overall API ought to be reviewed.

# Consider alternate designs for traits and APIs for stream isolation
https://gitlab.torproject.org/tpo/core/arti/-/issues/416 · Nick Mathewson · updated 2022-05-23 · milestone: Arti 1.0.0: Ready for production use

This Tuesday, @eta, @diziet, and I talked about different possible interfaces for the new stream isolation APIs. This ticket exists mainly to record our discussion for future reference. See also #414 for other stream isolation followups.
(cc @trinity-1686a)
Right now there are two traits used for isolation: `Isolation` and `IsolationHelper`. The `Isolation` trait applies to all possible stream isolation values, and is used internally to hold "the real underlying isolation setting of a stream or circuit". The `IsolationHelper` trait is used to implement a particular type-oriented kind of isolation: every type that implements `IsolationHelper` is incompatible with `Isolation`s of any other type. We expect that users will (almost?) always want to implement just `IsolationHelper`.
The questions we were trying to answer basically come down to:
* If users are only expected to implement `IsolationHelper`, is it necessary for `Isolation` to be public?
* Should the user-facing API use generics or trait objects?
IIRC, we all generally agreed about these points:
* It would be better not to expose more traits than necessary.
* It would be good if we do not prevent ourselves, down the road, from adding an accessor to retrieve a stream's isolation setting.
* Adding generic parameters to `StreamPrefs` could be at least a bit confusing...
* ... but making users think about `dyn Isolation` could *also* be at least a bit confusing.
* We should make sure that users can understand what isolation is, how it works, and how to use it.
* ... but we shouldn't make users think they *need* more knowledge than they really do.
We didn't all agree on these points:
* Whether generic parameters on `StreamPrefs` or trait objects are *more* confusing.
* Whether it's important to specify how isolation works "mathematically", and therefore wise to expose the `Isolation` trait.
* Whether exposing the `Isolation` trait will confuse users who might otherwise have been happy with `IsolationHelper`.
* Whether we might someday want to have `Isolation` objects that aren't also `IsolationHelper`.
* Whether it is more confusing to let the user know that there is downcasting going on behind the scenes, or more confusing to make the user wonder how `Isolation` actually works.
* Whether to rename `Isolation` to `AbstractIsolation` and `IsolationHelper` to `Isolation`, or something like that.
* How important it is to be able to pass a pre-existing `Box<dyn Isolation>` as the isolation for a new stream.
----
Here are the designs that we thought about:
### Proposal A
```
pub struct StreamPrefs {
iso: Box<dyn Isolation>,
}
impl StreamPrefs {
fn set_isolation<T: Isolation>(&mut self, iso: T);
// or maybe
fn set_isolation(&mut self, iso: Box<dyn Isolation>);
// (or maybe both)
fn get_isolation(&self) -> &dyn Isolation;
}
// hypothetically later on
impl DataStream {
fn get_isolation(&self) -> &dyn Isolation;
}
```
This option avoided (most) generic parameters in the API, at the expense of more trait objects.
This option is the closest to what we actually did. We refined the API in !418 to
```
fn set_isolation<T>(&mut self, iso: T)
where T: Into<Box<dyn Isolation>>
```
### Proposal B
```
pub struct StreamPrefs<T: Isolation = IsolationToken> {
iso: T,
}
impl<T: Isolation> StreamPrefs<T> {
fn set_isolation(&mut self, iso: T);
fn get_isolation(&self) -> &T;
}
struct TorClient<R> {
... stream_prefs: StreamPrefs<IsolationToken> ...
}
impl<R> TorClient<R> {
fn set_prefs(&self, new_prefs: StreamPrefs<IsolationToken>) {...}
fn connect_with_prefs<TA, T>(&self, target_addr: TA, prefs: StreamPrefs<T>) {...}
}
// hypothetically later on
struct DataStream {
// TorClient converts the StreamPrefs<T> into the Box<dyn Isolation>
// when the stream is created
isolation: Box<dyn Isolation>
}
impl DataStream {
// does a downcast internally
fn get_isolation<T: Isolation>(&self) -> Option<&T>;
}
```
This option avoided trait objects in the API, at the expense of more generic parameters.
### Proposal C
```
pub struct StreamPrefs {
iso: dyn Isolation,
}
impl StreamPrefs {
fn set_isolation<T: Isolation>(&mut self, iso: T);
fn get_isolation(&self) -> IsolationRef;
}
struct IsolationRef(Box<dyn Isolation>);
impl Isolation for IsolationRef {..}
```
Nobody but @nickm liked this one. The idea was to introduce an IsolationRef to hide the existence of `dyn`.
### Proposal D
```
// Name subject to bikeshed. IsolationBase = Isolation from A.
struct IsolationRef(Box<dyn IsolationBase>);
impl<T: Isolation> From<T> for IsolationRef { ... }
impl IsolationRef {
fn downcast<T: Isolation>(&self) -> Option<&T>;
}
impl StreamPrefs {
fn set_isolation(&mut self, iso: IsolationRef);
fn get_isolation(&self) -> &IsolationRef;
}
// hypothetically later on
struct DataStream {
isolation: IsolationRef
}
impl DataStream {
fn get_isolation(&self) -> &IsolationRef;
}
```
----
All of the proposals above were disliked by at least one person, except for "Proposal B", which was disliked by two people.

# CLI: unify, streamline, and refactor listener code
https://gitlab.torproject.org/tpo/core/arti/-/issues/408 · Nick Mathewson · updated 2022-12-18 · assignee: Ian Jackson · milestone: Arti 1.0.0: Ready for production use

Right now we have two kinds of listeners: DNS and SOCKS. We should consider simplifying the logic that creates them a lot.
Some goals are:
* [ ] Eliminate duplicate code.
* [ ] Allow multiple listener ports of the same type.
* [ ] Allow listening on non-localhost addresses
* [ ] Fail with an error if the port binding fails for some reason other than "we don't support that address family."

# If we have no consensus, save any well-signed consensus we're given?
https://gitlab.torproject.org/tpo/core/arti/-/issues/402 · Nick Mathewson · updated 2022-05-23 · milestone: Arti 1.0.0: Ready for production use

When starting with its clock "a bit in the future" and no existing consensus, Arti will fetch the latest consensus... then throw it away because it is not new enough, and try again.
That's silly: even if the consensus is not usable, it is better than no consensus at all, since we could save it and use it for applying consensus diffs. It would also help us pick a better `If-Modified-Since` for future consensus requests.
Found while investigating #329. See also #401.

# A well-signed consensus from the future indicates that our clock is wrong.
https://gitlab.torproject.org/tpo/core/arti/-/issues/401 · Nick Mathewson · updated 2022-05-23 · milestone: Arti 1.0.0: Ready for production use

Currently arti behaves badly and downloads too much if it is running with its clock set too far in the past.
If we are served a consensus from the future, I think that right now we won't even try to get its certs and validate it. But maybe we should; if it turns out that there _is_ a valid consensus from the future, then maybe we should take that as a sign that our clock is wrong, and warn the caller/user accordingly.
Found while investigating #329.

# Stream directory downloads to reduce latency and save RAM
https://gitlab.torproject.org/tpo/core/arti/-/issues/390 · Nick Mathewson · updated 2023-02-27 · milestone: Arti 1.0.0: Ready for production use

Here's a tricky one, but it has the potential to save time and memory.
Right now, the `tor-dirclient` API downloads the entire requested object to RAM, decompressing it as we go. That's okay for stuff like consensus documents, where we will need the whole thing decompressed anyway, but it's less good for stuff like microdescriptors, where we'd like to handle each one as soon as we receive it, and we get a lot of them in a single document. This means that we're keeping like 10MB of temporary string data around when all but the most recent 3-4k is totally parsable.
It's also bad for latency, since we can be in a position where the information we would need to become bootstrapped is sitting in a download buffer, waiting for the download to complete.
We could save intermediate memory and latency by refactoring our downloader code to (optionally) return a bytestream of downloaded information, and then to write code to convert that bytestream into a `Stream` of `Microdesc` or `AuthCert`.
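The bytestream-to-`Stream` conversion might look roughly like this sketch (a hypothetical `MicrodescSplitter` type, not the real tor-dirclient API). Each microdescriptor begins with an `onion-key` line, so the splitter can emit an item as soon as it sees the *next* boundary, and it naturally holds the final item back until the stream is known to be finished:

```rust
/// Incrementally split a decompressed byte stream into microdescriptor texts.
struct MicrodescSplitter {
    buf: String,
}

impl MicrodescSplitter {
    fn new() -> Self {
        Self { buf: String::new() }
    }

    /// Feed a chunk; return any microdescs that are now known to be complete.
    fn push(&mut self, chunk: &str) -> Vec<String> {
        self.buf.push_str(chunk);
        let mut out = Vec::new();
        // A "\nonion-key" marks the start of the next microdescriptor.
        while let Some(i) = self.buf.find("\nonion-key") {
            let rest = self.buf.split_off(i + 1); // keep the '\n' with the item
            out.push(std::mem::replace(&mut self.buf, rest));
        }
        out
    }

    /// The stream is finished: the final buffered item is now safe to emit.
    fn finish(self) -> Option<String> {
        if self.buf.is_empty() { None } else { Some(self.buf) }
    }
}

fn main() {
    let mut s = MicrodescSplitter::new();
    // The first chunk completes one microdesc and leaves a second buffered.
    assert_eq!(s.push("onion-key\nA\nonion-key\nB"), vec!["onion-key\nA\n".to_string()]);
    assert_eq!(s.push("\n"), Vec::<String>::new());
    assert_eq!(s.finish(), Some("onion-key\nB\n".to_string()));
}
```

Each emitted text could then be parsed into a `Microdesc` and yielded from a `Stream`.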
This would require significant refactoring in bootstrap.rs.
Found while doing #87
----
*Edited to add*: One caveat here. Many prefixes of a microdescriptor are themselves valid microdescriptors. Thus, when parsing a stream of microdescriptors, you can't safely parse the last one until the stream is finished.
----
*Edited to add*: Another application of this approach: we have some interest in being able to reject consensus documents _early_ if their first 1k describes a consensus we wouldn't use.

# Change Store::microdescs to return an Iterator? Or to parse incrementally?
https://gitlab.torproject.org/tpo/core/arti/-/issues/389 · Nick Mathewson · updated 2022-05-23 · milestone: Arti 1.0.0: Ready for production use

The `microdescs` method in SqliteStore returns a string-to-descriptor map; that means that it copies a whole pile of data out of the database before that data is parsed. It would be better for memory usage if we parsed the data as we read it, rather than allocating megabytes at a time.
(Showed up in profiles for #87)
I can think of a few ways to do this:
* we could have `microdescs()` return an iterator of (MdDigest, String).
* we could call `microdescs()` with smaller inputs.
* we could parse the results of `microdescs()` as we go along, perhaps by having it take a closure that tells it how to interpret its strings?
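The closure-taking variant from the last bullet might look roughly like this (hypothetical signature, not the real SqliteStore API): instead of materializing a whole digest-to-string map, each row is handed to a callback as it is read, so the caller can parse and then drop each raw string immediately:

```rust
/// Visit each (digest, body) row as it is read, without collecting them.
fn microdescs_foreach<I, F>(rows: I, mut handle: F)
where
    I: Iterator<Item = (String, String)>,
    F: FnMut(&str, &str),
{
    for (digest, body) in rows {
        handle(&digest, &body); // the raw `body` is freed once this returns
    }
}

fn main() {
    let rows = vec![
        ("d1".to_string(), "body-one".to_string()),
        ("d2".to_string(), "body-two".to_string()),
    ];
    let mut total = 0;
    microdescs_foreach(rows.into_iter(), |_digest, body| total += body.len());
    assert_eq!(total, 16);
}
```

The iterator-returning option from the first bullet has the same memory profile, but leaves the parse step to the caller.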