[Replicant] Where to find camera image processing code?
rosko37
rosko37 at gmail.com
Tue Aug 1 21:24:43 UTC 2023
Hello all,
I've not been able to find the algorithmic part of the camera stack in
Replicant (or in GrapheneOS, stock Android, etc.). I mean things like
HDR/HDR+, scene modes, combining views from multiple cameras, QR code
decoding, face detection, and so on. I can't find lower-level things like
debayering, white balance, and auto exposure either, but I'm guessing those
are normally implemented in fixed-function hardware blocks on the sensor or
in the SoC's image signal processor, or in drivers for those, rather than in
software. What I'm asking about are the things that are high-level or complex
enough that they would have to be implemented in regular software.
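To make concrete what I mean by "the algorithmic part": if something like auto
white balance were done on the CPU, I would expect code roughly of this shape.
This is just my own toy sketch of a gray-world pass, not code taken from any
of these projects:

    // Toy illustration only: a naive gray-world auto white balance over an
    // interleaved 8-bit RGB buffer. Real pipelines do this in the ISP with far
    // better statistics; this only shows the kind of per-pixel loop I mean.
    public final class GrayWorldAwb {
        public static void apply(byte[] rgb) {
            // Assumes a non-empty, non-degenerate RGBRGB... buffer.
            long sumR = 0, sumG = 0, sumB = 0;
            for (int i = 0; i < rgb.length; i += 3) {
                sumR += rgb[i] & 0xFF;
                sumG += rgb[i + 1] & 0xFF;
                sumB += rgb[i + 2] & 0xFF;
            }
            // Gray-world assumption: the channel averages should be equal,
            // so scale red and blue toward the green average.
            double gainR = (double) sumG / sumR;   // = avgG / avgR
            double gainB = (double) sumG / sumB;   // = avgG / avgB
            for (int i = 0; i < rgb.length; i += 3) {
                rgb[i]     = clamp((rgb[i] & 0xFF) * gainR);
                rgb[i + 2] = clamp((rgb[i + 2] & 0xFF) * gainB);
            }
        }

        private static byte clamp(double v) {
            return (byte) Math.max(0, Math.min(255, Math.round(v)));
        }
    }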
From some reading, it seems that in stock Android this code would live behind
the APIs called "camera2" in older versions and "CameraX" in newer ones.
However, any code referencing these in the Android open-source tree appears
to be high-level boilerplate/"stubs", with no actual implementations. By
"implementations", I would expect to see heavy for-loops over arrays,
edge/corner detectors, Haar cascades, and the like if it's done on the CPU,
or some fairly compute-heavy shaders if it's done on the GPU. I find it hard
to believe that such a substantial volume of code could be so easy to miss,
unless I'm looking in completely the wrong place in the code, which is what I
suspect.
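For example, the camera2 code that an app (or a bundled camera app) typically
contains is just request-building of roughly this shape; this is a rough
sketch I wrote, not taken from any particular app, and nothing in it computes
anything:

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.view.Surface;

    public final class StillCaptureSketch {
        // Builds and submits a still-capture request. The "modes" set here are
        // just enum constants handed down through the camera service to the
        // vendor HAL, which is where the actual processing happens.
        public static void capture(CameraDevice device,
                                   CameraCaptureSession session,
                                   Surface target) throws CameraAccessException {
            CaptureRequest.Builder builder =
                    device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            builder.addTarget(target);
            builder.set(CaptureRequest.CONTROL_AE_MODE,
                    CaptureRequest.CONTROL_AE_MODE_ON);
            builder.set(CaptureRequest.NOISE_REDUCTION_MODE,
                    CaptureRequest.NOISE_REDUCTION_MODE_HIGH_QUALITY);
            session.capture(builder.build(), /* callback= */ null, /* handler= */ null);
        }
    }

As far as I can tell, everything past builder.build() disappears into the
camera service and the HAL, which is exactly where I lose the trail.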
I suspect that in stock Android, especially on Google's own Pixel phones,
such functionality is closed source and therefore not part of the Android
open-source code. But for third-party Android-compatible OSes, I'm
wondering where to find this code. These all seem to HAVE a camera app. I
could envision several possibilities:
1. All high-level image processing code has to be provided per app by the app
developer, rather than by in-OS frameworks. However, as far as I can see, the
camera apps of the Android alternatives don't contain such code either; they
only contain high-level boilerplate for the various functions, which
presumably calls the actual implementations, so I assume this is NOT the path
they have taken. It would also mean that apps which lean on the OS-level
implementations of these operations on stock Android couldn't run, which
makes this seem like a less likely choice.
2. The Android-compatible distributions provide their own implementations of
the libraries that supply this functionality, possibly tied in with their
implementations of rendering and media processing more generally. This seems
to me the most likely route, as it would give a completely free codebase
while presenting an API to apps that mimics stock Android as closely as
possible (the sketch after this list shows the kind of API surface I mean).
However, I can't find this code.
3. The non-stock distros bundle stock implementations of this functionality
as closed-source binary blobs, or require updating from a version of stock
Android that already includes the necessary libraries and replace only part
of the existing OS (though I suspect the latter would raise significant
legal issues).
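Related to possibility 2, and to the question of what actually sits
underneath the API: from the app side, camera2 only reports whatever the
layer below it advertises. A rough probing sketch like the following (again
my own, not from any project) is the sort of thing one can run to see what a
device claims to support:

    import android.content.Context;
    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraManager;
    import java.util.Arrays;

    public final class HalCapabilityDump {
        // Prints the scene modes and capabilities each camera advertises. The
        // framework only relays these lists; whatever implements them lives
        // below the public API.
        public static void dump(Context context) throws CameraAccessException {
            CameraManager manager =
                    (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
            for (String id : manager.getCameraIdList()) {
                CameraCharacteristics chars = manager.getCameraCharacteristics(id);
                System.out.println("camera " + id
                        + " scene modes: " + Arrays.toString(
                                chars.get(CameraCharacteristics.CONTROL_AVAILABLE_SCENE_MODES))
                        + ", capabilities: " + Arrays.toString(
                                chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)));
            }
        }
    }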
I'm curious because, to me, one of the biggest advantages of a fully
open-source phone OS is the ability to tinker with such algorithms and
implement custom versions, rather than relying on some kind of "secret
sauce" provided by the phone manufacturer.
-Andrew Rosko