• GamingChairModel@lemmy.world · 3 days ago

    This write-up is really, really good. I think about these concepts whenever people dismiss astrophotography or other computation-heavy photography as fake, software-generated images, when the reality is that translating sensor data into a graphical representation for the human eye (with all the quirks of human vision, especially around brightness and color) requires conscious decisions about how the charges or voltages on a sensor should become pixels in a digital file.

    • XeroxCool@lemmy.world · 2 days ago

      Same, especially because I’m a frequent sky-looker but have to prepare any ride-along for the fact that all we’re going to see by eye is pale fuzzy blobs. All my camera is going to show you tonight is pale, spindly clouds. I think it’s neat as hell that I can use some $150 binoculars to find interstellar objects, but many people are bored by the lack of Hubble-quality sights on tap. Like… yes, that’s exactly why they sent a telescope to space to get those images.

      That being said, I once had the opportunity to see the Orion Nebula through a ~30" reflector at an observatory, and damn. I got to eyeball roughly what my camera can do in a single frame with perfect tracking and settings.

  • JeeBaiChow@lemmy.world · 4 days ago

    Good read. Funny how I always thought the sensor read RGB directly, instead of simple light levels behind a filter pattern.
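
    Something like this, as a very rough sketch: each photosite only records how much light got through its own little color filter, and the two missing channels at every pixel are interpolated afterwards. This assumes an RGGB Bayer layout, strictly positive raw values, and naive bilinear interpolation; real raw converters use much smarter demosaicing.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_rggb(raw):
        """Naive bilinear demosaic of a single-channel RGGB Bayer mosaic.

        Each photosite measured light through only one color filter; the
        other two channels at that pixel have to be interpolated.
        Assumes raw values are strictly positive (0 marks "no sample").
        """
        h, w = raw.shape
        rgb = np.zeros((h, w, 3))
        # Scatter each photosite's value into the channel its filter passes.
        rgb[0::2, 0::2, 0] = raw[0::2, 0::2]   # R sites
        rgb[0::2, 1::2, 1] = raw[0::2, 1::2]   # G sites on even rows
        rgb[1::2, 0::2, 1] = raw[1::2, 0::2]   # G sites on odd rows
        rgb[1::2, 1::2, 2] = raw[1::2, 1::2]   # B sites
        # Fill the gaps in each channel with a weighted average of the
        # neighboring sites that actually measured that color.
        k = np.array([[0.25, 0.5, 0.25],
                      [0.50, 1.0, 0.50],
                      [0.25, 0.5, 0.25]])
        for c in range(3):
            chan = rgb[:, :, c]
            known = (chan > 0).astype(float)
            interp = (convolve2d(chan, k, mode="same")
                      / np.maximum(convolve2d(known, k, mode="same"), 1e-9))
            rgb[:, :, c] = np.where(known > 0, chan, interp)
        return rgb
    ```

    Which interpolation to use, how to white-balance the three channels, and so on are exactly the kind of deliberate choices the article is talking about.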

    • _NetNomad@fedia.io · 4 days ago

      wild how far technology has marched on and yet we’re still essentially using the same basic idea behind technicolor. but hey, if it works!

      • GamingChairModel@lemmy.world · 3 days ago

        Even the human eye basically follows the same principle. We have three types of cones, each sensitive to a different range of wavelengths, and each cone cell reports a single one-dimensional value: the intensity of light hitting it within its sensitivity range. Our visual cortex combines those inputs from both eyes, plus the information from the color-blind rods, into a single seamless image.
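
        A toy version of that reduction, with made-up Gaussian curves standing in for the real L/M/S cone sensitivities (the real curves are broader and overlap far more, so the numbers here are purely illustrative):

        ```python
        import numpy as np

        wavelengths = np.arange(380, 781)  # visible range, in nm

        def fake_cone(peak_nm, width_nm):
            """Made-up bell-shaped sensitivity curve; real cone responses differ."""
            return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

        # Stand-ins for the three cone types.
        cones = {"L": fake_cone(565, 50),
                 "M": fake_cone(540, 45),
                 "S": fake_cone(445, 30)}

        def cone_responses(spectrum):
            """Each cone collapses an entire spectrum into one number: how much
            light landed inside its sensitivity range. Color vision starts from
            these three numbers per spot, not from the spectrum itself."""
            return {name: float(np.sum(sens * spectrum))
                    for name, sens in cones.items()}

        # Very different spectra can produce nearly the same three numbers,
        # which is exactly why an RGB display can fool the eye at all.
        green_line = np.where(wavelengths == 550, 100.0, 0.0)
        print(cone_responses(green_line))
        ```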

    • Davel23@fedia.io · 4 days ago

      For a while the best/fanciest digital cameras had three CCDs, one for each RGB color channel. I’m not sure if that’s still the case or if the color filter process is now good enough to replace it.

      • CookieOfFortune@lemmy.world · 3 days ago

        There are some sensors that stack the color-sensitive layers vertically instead of using a Bayer filter. I don’t think they’re popular, though, because the low-light performance is worse.

      • lefty7283@lemmy.world · 3 days ago

        At least for astronomy, you just have one sensor (they’re all CMOS nowadays) and rotate out the RGB filters in front of it.
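
        Roughly like this, if it helps picture the workflow. The function and frame names are placeholders, and real stacking software also aligns, calibrates, and weights the frames before anything like this simple combination:

        ```python
        import numpy as np

        def combine_filtered_frames(r_frame, g_frame, b_frame):
            """Stack three monochrome exposures, shot through R, G, and B filters
            on the same mono sensor, into one color image."""
            rgb = np.stack([r_frame, g_frame, b_frame], axis=-1).astype(float)
            # Per-channel scaling is itself a color-balance decision; the sensor
            # never recorded "color", only counts behind each filter.
            rgb /= np.maximum(rgb.max(axis=(0, 1), keepdims=True), 1e-9)
            return np.clip(rgb, 0.0, 1.0)
        ```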

        • trolololol@lemmy.world · 3 days ago

          Is that the case for big ground-based and space telescopes too? I can imagine swapping the filters could cause wobbling.

          Btw, is that also how infrared and X-ray telescopes work?

          • lefty7283@lemmy.world · 3 days ago

            It sure is! Monochrome sensors are also great for narrowband imaging, where the filters let through one specific wavelength of light (like hydrogen-alpha), which lets you do false-color imaging (a small sketch of that mapping is below).

            IR is basically the same. Here’s the page on JWST’s filters. No clue about X-ray scopes, but IIRC they don’t use any kind of traditional CMOS or CCD sensor.
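
            And the false-color part really is just a mapping choice. A minimal sketch, using the common "Hubble palette" convention (the function name and the simple scaling are placeholders):

            ```python
            import numpy as np

            def hubble_palette(sii, halpha, oiii):
                """Map narrowband exposures onto display channels: SII (~672 nm) -> red,
                H-alpha (656 nm) -> green, OIII (~501 nm) -> blue. The assignment is a
                convention, not physics: none of these single wavelengths is literally
                red, green, or blue, which is what makes the result 'false color'."""
                rgb = np.stack([sii, halpha, oiii], axis=-1).astype(float)
                return np.clip(rgb / np.maximum(rgb.max(), 1e-9), 0.0, 1.0)
            ```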

      • worhui@lemmy.world · 3 days ago

        Three-chip CMOS sensors are about 20-25 years out-of-date technology. Mosaic-pattern sensors have eclipsed them on most imaging metrics.

  • tyler@programming.dev · 4 days ago

    This is why I don’t say I need to edit my photos, but rather that I need to process them. Editing is clearly understood by the layperson as Photoshop, and while they don’t necessarily understand processing, many people still remember taking film to a store and getting it processed into photos they could give someone.

    • Fmstrat@lemmy.world · 2 days ago

      As a former photographer back when digital was starting to become the default, I wish I had thought of this.

  • worhui@lemmy.world · 3 days ago

    Not sure how much this is worth mentioning, considering how good the overall write-up is.

    Even though the human visual system has a non-linear perception of luminance, the camera data actually needs to be adjusted because the display has a non-linear response. The data is encoded so that, after the display applies its own response curve, the light coming off the screen is linear again (a rough sketch of that round trip is below). So it’s the display’s non-linear response that is being corrected, not the human visual system.

    There is a bunch more that could be done and described.
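
    A rough round-trip sketch of that point, using a simple power-law display model (real sRGB uses a piecewise curve, but the idea is the same):

    ```python
    import numpy as np

    DISPLAY_GAMMA = 2.2  # simple model; real sRGB is a piecewise curve

    def encode_for_display(linear, gamma=DISPLAY_GAMMA):
        """Gamma-encode linear sensor/scene data in [0, 1]. The display will
        later raise its input to `gamma`, so pre-applying 1/gamma makes the
        light leaving the screen proportional to the light the sensor saw."""
        return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

    def display_response(signal, gamma=DISPLAY_GAMMA):
        """Idealized display: output luminance ~ input ** gamma."""
        return np.clip(signal, 0.0, 1.0) ** gamma

    x = np.linspace(0.0, 1.0, 5)
    print(display_response(encode_for_display(x)))  # ~x: the round trip is linear
    ```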