[darcs-users] darcs patch: add exception to haskell_policy.sh for D... (and 61 more)

Jason Dagit dagit at codersbase.com
Fri Oct 31 03:26:37 UTC 2008


On Thu, Oct 30, 2008 at 2:20 PM, David Roundy <droundy at darcs.net> wrote:
> On Wed, Oct 29, 2008 at 10:30:12AM -0700, Jason Dagit wrote:
>> On Wed, Oct 29, 2008 at 7:52 AM, David Roundy <droundy at darcs.net> wrote:
>> > On Wed, Oct 29, 2008 at 01:00:08PM +0000, Eric Kow wrote:
>> >> As I mentioned earlier, this is the promised Salvo 8 (the big one),
>> >> which I have verified to pass tests on both GHC 6.6 and 6.8.
>> >>
>> >> If it helps, here is my contribution to the review.
>> >>
>> >> I think once we get over this hump, merging the rest of the sprint
>> >> patches in (or not) is easy.
>> >>
>> >> Also, if there are any changes to be made, I think we should make them
>> >> *on top* of this bundle because amend-recording is simply not going to
>> >> be practical (I don't mean to belabour the point, it's just that
>> >> amending is a bit ingrained in our culture, so people get a bit nervous
>> >> when there is a large amount of work that wants to get in)
>> >
>> > I don't think I'll have time to review this bundle today, but had a
>> > question I'd like to ask even before I review it.  I presume this has
>> > been benchmarked against the pre-salvo-8 version.  How does it affect
>> > timings? Not that timings are everything, but it'd be nice to see if
>> > it speeds things up or (although it seems unlikely) slows things
>> > down, and if so, by how much.
>>
>> I want to see benchmarks too, but I thought I would justify why we
>> expect this to be no slower than the previous code... Everything
>> below is stuff that we discussed during the sprint.
>>
>> For everything that Don rewrote (like removing the C code), he checked
>> that the code generated by GHC after the change was at least as good
>> as the code generated before it.  He said it looked like the generated
>> code was either the same or better in each case.
>
> Was he aware that the bytestring code started out significantly slower than
> the OldFastPackedString code? Or did he run any timings?

Don has assured me that he started the BS implementation as a spin-off
of what was in darcs as fast packed strings.  He also assures me that
with every release he does extensive performance checks to make sure
that the performance of BS is monotonically improving.  Basically, it's
not reasonable to expect BS to be slower than OldFastPackedString for
the operations that BS implements.

> I suspect there
> are significant regressions from the OldFastPackedString code mixed with
> major improvements elsewhere.  I'd rather not add these significant
> regressions to the repository, but since the first thing he did was to
> remove the older, faster code, it's very hard to track them down.  It's
> very frustrating...

We have no reason to believe that the indirection layer we added to
make the BS API compatible with OldFastPackedString was efficient.
Since that layer was never really profiled (as far as I know), it makes
sense that it is a source of slowdown.  My point is that the regression
is already in darcs: it is what you called "old" in your timings.

To me, the real test is OldFastPackedString using its native API versus
Data.ByteString using its native API.  We already have that test,
though: it's "fps" versus "new".  The outcome seems to favor
Data.ByteString.  I struggle to see what value "old" adds to this
discussion.
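
For anyone who wants a rough native-vs-native comparison outside the
darcs benchmark suite, here is a small, hypothetical timing sketch
using only getCPUTime from base.  The workload (counting newlines in a
file), the file name, and the labels are invented for illustration; the
real numbers are the "fps" and "new" columns in the posted timings:

module Main (main) where

import qualified Data.ByteString.Char8 as B
import System.CPUTime (getCPUTime)
import Text.Printf (printf)

-- Run an action and report its CPU time in seconds.
timed :: String -> IO a -> IO a
timed label act = do
    start <- getCPUTime
    x     <- act
    end   <- getCPUTime
    printf "%-12s %.3f s\n" label
           (fromIntegral (end - start) * 1e-12 :: Double)
    return x

main :: IO ()
main = do
    -- Native Data.ByteString: read a big file and count its newlines.
    timed "bytestring" $ do
        s <- B.readFile "bench-input"   -- any reasonably large file
        print (B.count '\n' s)
    -- Plain String baseline, standing in for code that converts to
    -- String somewhere along the way.
    timed "string" $ do
        s <- readFile "bench-input"
        print (length (filter (== '\n') s))

Nothing fancy, but it measures each implementation through its own API,
which is exactly the "fps" versus "new" comparison.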

Thanks,
Jason
PS: David, I think there may be some messages in this thread that were
directed at you but not addressed directly to you.  I'm not sure
whether you'll see them.

