There may be some problems here with Python dependencies (the available packaged implementations of RSA, PKCS#1, and ASN.1 aren't all that great, as noted in legacy/trac#5810 (closed)). I don't care what dependencies we add to get this to work; it's causing BridgeDB's new Stem-based parsers (legacy/trac#9380 (moved)) to choke during test runs on Leekspin's fake bridge descriptors.
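For a sense of what's involved: as far as I can tell from dir-spec, the part Leekspin has to fake is a Tor-style router signature, i.e. PKCS#1 v1.5 padding over the bare SHA-1 digest of the descriptor text (no ASN.1 DigestInfo wrapper, so the libraries' standard signing helpers don't quite fit). A rough sketch, assuming PyCrypto is an acceptable dependency; the function names are mine, not anything we'd necessarily ship:

```python
import base64
import hashlib
import textwrap

from Crypto.PublicKey import RSA
from Crypto.Util.number import bytes_to_long, long_to_bytes


def make_signing_key(bits=1024):
    """Generate a throwaway RSA identity key for a fake descriptor."""
    return RSA.generate(bits)


def tor_router_signature(key, descriptor_text):
    """Sign the SHA-1 digest of descriptor_text the way tor appears to:
    PKCS#1 v1.5 padding over the bare digest, no ASN.1 DigestInfo."""
    digest = hashlib.sha1(descriptor_text.encode('ascii')).digest()
    key_bytes = (key.n.bit_length() + 7) // 8

    # Manual PKCS#1 v1.5 type-01 padding of the bare 20-byte digest.
    padded = b'\x00\x01' + b'\xff' * (key_bytes - len(digest) - 3) + b'\x00' + digest

    # Raw RSA private-key operation on the padded block.
    signature = long_to_bytes(pow(bytes_to_long(padded), key.d, key.n), key_bytes)

    b64 = base64.b64encode(signature).decode('ascii')
    return ('-----BEGIN SIGNATURE-----\n'
            + '\n'.join(textwrap.wrap(b64, 64))
            + '\n-----END SIGNATURE-----\n')


# Example: sign everything up to and including the "router-signature" line.
key = make_signing_key()
fake_descriptor = 'router Unnamed 10.0.0.1 9001 0 0\nrouter-signature\n'
print(fake_descriptor + tor_router_signature(key, fake_descriptor))
```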
In Stem's test suite there are two checks: we generate a valid digest for one and mock out the other (sketched below). I think what you want to look at is this. The tricky bit is the sign_descriptor_content() function.
I'd be ok with expanding this module to have the mocking.py functions for generating valid descriptor content if it would be helpful (or something more like what Leekspin does if that's better). I've been meaning to look at Leekspin at some point to see if its functionality belongs in Stem but haven't had the time.
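To make the "mock out the other" half concrete, here is a self-contained sketch, meant to be run directly as a script (hence patching '__main__'). Every name in it (check_signature, parse_fake_descriptor, FAKE_DESCRIPTOR) is a placeholder, not Stem's or BridgeDB's actual API:

```python
import unittest

try:
    from unittest import mock  # Python 3.3+
except ImportError:
    import mock  # standalone 'mock' package on Python 2

FAKE_DESCRIPTOR = (
    'router Unnamed 10.0.0.1 9001 0 0\n'
    'router-signature\n'
    '-----BEGIN SIGNATURE-----\nnot a real signature\n-----END SIGNATURE-----\n'
)


def check_signature(content):
    # Stand-in for the cryptographic check that needs RSA/PKCS#1/ASN.1 support.
    raise NotImplementedError('no usable RSA/PKCS#1/ASN.1 implementation')


def parse_fake_descriptor(content):
    check_signature(content)
    nickname, address = content.splitlines()[0].split(' ')[1:3]
    return {'nickname': nickname, 'address': address}


class TestFakeDescriptor(unittest.TestCase):
    def test_with_signature_check_mocked_out(self):
        # Patch the crypto check so the test only exercises the parsing logic.
        with mock.patch('__main__.check_signature', return_value=True):
            desc = parse_fake_descriptor(FAKE_DESCRIPTOR)

        self.assertEqual('10.0.0.1', desc['address'])


if __name__ == '__main__':
    unittest.main()
```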
BridgeDB's code still has some of the old stuff in it (legacy/trac#12505 (moved)), which is stylistically horrifying by PEP8 standards. Also, a lot of it is just plain horrifying in every way. Until those are fixed up, an automated test with pylint and/or pep8 would just fail, which I fear would cause confusion for new contributors. But it's on my TODO list, once the cleanup is done, to add something of the sort to enforce consistency.
> I'd be ok with expanding this module to have the mocking.py functions for generating valid descriptor content if it would be helpful (or something more like what Leekspin does if that's better).
I think Leekspin would only need stem.test.mocking.sign_descriptor_content().
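If the call shape is what I'm guessing (it takes the raw descriptor text and hands back a copy with valid signing-key, fingerprint, and router-signature fields), then Leekspin's side of it would be tiny, something like the following. Treat the import path and arguments as assumptions, not the real API:

```python
# Hypothetical usage only: the real sign_descriptor_content() signature may differ.
from stem.test.mocking import sign_descriptor_content

unsigned_descriptor = (
    'router Unnamed 10.0.0.1 9001 0 0\n'
    'published 2014-07-21 00:00:00\n'
    'router-signature\n'
)
signed_descriptor = sign_descriptor_content(unsigned_descriptor)
```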
> I've been meaning to look at Leekspin at some point to see if its functionality belongs in Stem but haven't had the time.
Currently, Leekspin only generates unsanitised bridge descriptors. It's also on my TODO list to make it support relay descriptors. It might be useful if you want to test Stem's parsing of unsanitised descriptors, but obviously not until bugs like this one are fixed. :)
> Until those are fixed up, making an automated test with pylint and/or pep8 would just fail, which I fear would cause confusion for new contributors.
Not necessarily. Stem was once in a similar boat. It took days of dedicated effort to shift to being mostly pep8 conformant, so what I did was overhaul Stem one issue at a time.
My suggestion would be to run pep8 over BridgeDB and simply note all the issue types it fails for. Then blacklist them all. This is done via a 'pep8.ignore' configuration like...
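Something along these lines, in whatever settings file BridgeDB's test runner reads; the codes below are only placeholders for whatever pep8 actually reports against BridgeDB:

```
# Illustrative only: blacklist every pep8 issue type BridgeDB currently fails,
# then whittle the list down as each one gets cleaned up.
pep8.ignore E111
pep8.ignore E127
pep8.ignore E501
pep8.ignore W291
```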
Now the tests will pass, and if you get a patch for anything that violates the parts of pep8 you already comply with, they'll get a heads up. After this you can shift BridgeDB to be pep8 conformant one issue at a time. :)
> Currently, Leekspin only generates unsanitised bridge descriptors.
By 'unsanitised bridge descriptors' you mean regular server descriptors, right? If not then how do they differ?
An open question in my mind is: how do Leekspin and Stem's mocking module differ? They both make test descriptor data, right? Maybe there's a good reason for them to both continue independently - I just haven't sunk the time into looking Leekspin over yet.