[darcs-users] [patch260] Implement darcs optimize --http (and 2 more)
me at mornfall.net
Fri Jun 25 05:24:44 UTC 2010
One more thing occurred to me...
Alexey Levan <bugs at darcs.net> writes:
> +doOptimizeHTTP :: IO ()
> +doOptimizeHTTP = do
> +  rf <- either fail return =<< identifyRepoFormat "."
> +  unless (formatHas HashedInventory rf) . fail $
> +    "Unsupported repository format:\n" ++
> +    "  only hashed repositories can be optimized for HTTP"
> +  createDirectoryIfMissing False packsDir
> +  let i = darcsdir </> "hashed_inventory"
> +  is <- dirContents "inventories"
> +  pr <- dirContents "pristine.hashed"
> +  BL.writeFile (packsDir </> "basic.tar.gz") . compress . write =<<
> +    mapM fileEntry' (i : (is ++ pr))
> +  ps <- dirContents' "patches" $ \x -> all (x /=) ["unrevert", "pending",
> +    "pending.tentative"]
> +  BL.writeFile (packsDir </> "patches.tar.gz") . compress . write =<<
> +    mapM fileEntry' ps
Writing the tarballs like this can disrupt gets that are in the middle of
fetching them: a client could end up with a corrupted tarball. You may
need to write them atomically -- write to a temporary file and move it
over the existing one. There is some functionality in Darcs.Lock to
achieve that, although it may need to be extended to cover lazy
ByteStrings.
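Something along these lines would do, assuming only base, bytestring,
directory and filepath (writeAtomicBL is just an illustrative name, not
the actual Darcs.Lock API, and a real implementation should pick a
unique temporary name, e.g. via openTempFile):

```haskell
import qualified Data.ByteString.Lazy as BL
import System.Directory (renameFile)
import System.FilePath ((<.>))

-- Write a lazy ByteString so that readers never observe a partial file:
-- write everything to a temporary file first, then rename it over the
-- destination. On POSIX, rename within one filesystem is atomic.
writeAtomicBL :: FilePath -> BL.ByteString -> IO ()
writeAtomicBL path content = do
  let tmp = path <.> "tmp"   -- illustrative; should really be a unique name
  BL.writeFile tmp content   -- forces the whole lazy ByteString to disk
  renameFile tmp path        -- readers see either the old or the new file
```

A concurrent get then either fetches the old tarball in full or the new
one in full, never a mix.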
Presumably, it does not matter if a client gets different versions of the
basic tarball and the patches tarball: the extra patches just get
pre-cached for the subsequent pull. No big deal...
Please don't amend this patch though, I'll take it as it is -- just make
sure to post a followup (on a new patch ticket, even).
> +  where
> +    packsDir = darcsdir </> "packs"
> +    fileEntry' x = do
> +      content <- BL.fromChunks . return <$> gzReadFilePS x
> +      tp <- either fail return $ toTarPath False x
> +      return $ fileEntry tp content
> +    dirContents d = dirContents' d $ const True
> +    dirContents' d f = map ((darcsdir </> d) </>) . filter (\x ->
> +      head x /= '.' && f x) <$> getDirectoryContents (darcsdir </> d)