What Makes sRGB a Special Color Space?

Author: todsacerdoti

Score: 68

Comments: 29

Date: 2021-12-02 03:23:59

________________________________________________________________________________

dzdt wrote at 2021-12-02 12:38:58:

The challenge with color spaces is that the fundamental goal is to reproduce human perception, not a physical quantity. Color spaces are about human biology, physiology, and psychology, not about physics.

To a first approximation, humans perceive color on the basis of three different kinds of light-sensitive "cone" cells. The three varieties are most sensitive to red, green, and blue light.

An image with control over intensities of a red, green, and blue component of emitted light can match the stimulation of cones over some range ("gamut") of colors a person could perceive. Orange wavelengths of light are between the wavelengths of a pure red and pure green, and a mixture of red and green light will be perceived as orange.

The only reason the RGB approximation to color works is because of how the human eye works. But it misses a bunch about how the eye works! Cones are not sensitive to just a single color; they have a response curve over a whole range of wavelengths. This means there are pure colors (single wavelengths) which produce an eye response outside the span of what can be produced by mixing pure red, green, and blue light signals.

There are other dimensions of complexity to the human biology component: there is a 4th kind of light-sensitive cell (rods) that contributes mostly to low-light vision. Some people have fewer kinds of cone cells (color-blindness) or more (tetrachromacy[1]).

Eyes are also sensitive to polarization of light [2], which as far as I know has still not entered the display-space discussion.

So the color space challenge is about reducing complex, infinite-dimensional reality to a low-dimensional approximation based on the limitations of human biology, while using incomplete, flawed approximations of that biology.

[1]

https://www.optimax.co.uk/blog/tetrachromacy-superhuman-visi...

[2]

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4528539/

spiralx wrote at 2021-12-02 12:46:57:

A great demonstration of the strangeness of representing colours is this video on the colour brown:

https://www.youtube.com/watch?v=wh4aWZRtTwU

Colour theory can get pretty complicated, but the video is very understandable as well as interesting :)

scottlamb wrote at 2021-12-02 04:52:23:

There was a lot of hand-waving because I was writing about the sRGB spec without having seen it; a lot of the content in my post on that topic was inferred from information on Wikipedia and in the draft spec for sRGB.

What a ridiculous situation and a great example of why specs should be open.

(Speaking of which: anyone know how to get a free, legal copy of any revision of ISO/IEC 14496-15? I've had luck with some of its close friends, but not that one.)

lstamour wrote at 2021-12-02 08:54:43:

It’s been my informal experience that if a spec is available for cheap from

https://www.evs.ee/en/

then it's often somewhat hard to find elsewhere, but in this case 14496-15 is only listed in its more expensive variant and there's no cheap one available.

When that happens, I can often find a copy of the spec by hunting down zip files of drafts attached to meeting records online, or a draft copy behind a simple registration form. I can't speak to the 2019 latest version or newer, but I did find what might be some earlier versions at the bottom of

https://mpeg.chiariglione.org/standards/mpeg-4/carriage-nal-...

for example. Or for newer work-in-progress:

https://www.mpegstandards.org/standards/MPEG-4/15/

No promises that anything found this way will be usable/useful yet.

scottlamb wrote at 2021-12-02 19:34:26:

I've looked at that mpeg.chiariglione.org URL you linked (many times) but it doesn't seem to have the full standard (even a draft), just amendments and very targeted corrections.

Thanks for the other two links; I don't see anything useful for this one, but maybe they'll help next time...

lstamour wrote at 2021-12-02 22:19:49:

How about this?

https://mpeg.chiariglione.org/standards/mpeg-4/carriage-nal-...

Seemed relatively complete to me, though I don't have access to source docs to confirm.

scottlamb wrote at 2021-12-02 22:56:42:

That's an amendment. I guess it's better than nothing, though: it has the definition of AVCDecoderConfigurationRecord (because it extends it).

adgjlsfhk1 wrote at 2021-12-02 06:51:22:

The one that I find especially ridiculous is IEEE 754. How do they expect languages to have correct math libraries if they don't publish what "correct" means?

zokier wrote at 2021-12-02 12:01:58:

IEEE 754 is published and publicly available for anyone to purchase. It's not freely distributed, but that's a far cry from unpublished work or something available only for members of the cabal.

e-dt wrote at 2021-12-03 04:21:56:

http://ipfs.io/ipfs/bafykbzacebqg3j6dzlg4srgyrfdksroopamjruu...

Scene_Cast2 wrote at 2021-12-02 06:51:10:

As a hobbyist, one of the issues I have with sRGB and color spaces in general is that debugging color space issues is difficult. For example, trying to figure out why images on Google Photos in Chrome look "off", and how to get images to look consistent between platforms (browser and OS), is frustrating.

rocqua wrote at 2021-12-02 09:18:56:

It is inherently a difficult process.

Firstly, true references are very hard to come by. That makes debugging hard already.

Secondly, color is not just a property of light. The same lamp, with the same spectrum of emitted light, can appear to have a different color to you depending on the surroundings. During daylight a lamp might look yellow-white, but looking at the same lamp at night, with your normal lamps on in your home, it could look blue. That is because your eyes 'adjust' how they see color depending on ambient conditions. This way your eyes always see a white sheet of paper as white, instead of it looking yellow under warm light or blue under very cold light.
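
A minimal sketch of that 'adjustment', assuming the classic von Kries model of chromatic adaptation: perceived color is roughly the stimulus normalized by whatever the eye currently takes to be white. All numbers below are made up for illustration:

    def perceived(color, adapted_white):
        # Crude von Kries normalization: the adapted white defines "white".
        return tuple(c / w for c, w in zip(color, adapted_white))

    lamp = (1.00, 0.95, 0.80)                  # hypothetical warm-white lamp (linear RGB)
    print(perceived(lamp, (1.0, 1.0, 1.0)))    # adapted to daylight: looks yellowish
    print(perceived(lamp, (1.0, 0.90, 0.70)))  # adapted to warm room light: looks bluer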

Things are even worse when considering the color of normal objects. The 'emitted spectrum' of an object depends on the spectrum of light that falls on the object.

Take two light sources: one is 'wide spectrum' and white, the other is a mix of a few wavelengths that also looks white. Two objects could appear to have the same color under one lamp, but look different under the other lamp.
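
This effect (a metameric pair) can be reproduced numerically. A toy sketch with made-up Gaussian "cone" sensitivities and illuminants, not real CIE data: the second reflectance is chosen in the null space of sensitivities times illuminant, so the cone responses match exactly under the first lamp only:

    import numpy as np

    wl = np.linspace(400, 700, 31)                       # wavelengths in nm
    g = lambda mu, sig: np.exp(-0.5 * ((wl - mu) / sig) ** 2)

    S = np.stack([g(570, 50), g(540, 45), g(445, 30)])   # toy L/M/S cone curves
    lamp1 = np.ones_like(wl)                             # broad, flat spectrum
    lamp2 = g(450, 40) + g(610, 30)                      # few-wavelength whitish mix

    r1 = 0.5 + 0.3 * np.sin(wl / 40)                     # reflectance of object 1
    # Perturb r1 within the null space of S * diag(lamp1), so object 2
    # stimulates the cones identically to object 1 under lamp 1.
    _, _, Vt = np.linalg.svd(S * lamp1)
    v = Vt[-1] / np.abs(Vt[-1]).max() * 0.15
    r2 = r1 + v

    print((S * lamp1) @ r1, (S * lamp1) @ r2)            # identical responses
    print((S * lamp2) @ r1, (S * lamp2) @ r2)            # no longer identical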

And the above situation is also still affected by the general color of your surroundings.

Hence it is quite hard to even say what 'color' an object has, or even what color a light source has. Generally the approach is to standardize. For light, that means specifying the 'color temperature' (and tint) of your ambient light, to ensure your eyes are adjusted to a known light situation.

For the color of objects, if attempts are made to specify a color, it is specified under a specific broad-spectrum light source. But I believe generally they are defined as 'this mix of pigments'.

This means that for objects, it is essentially impossible to take the color of an object and make a second object that has the same color, unless you know how the first object got its color.

ISL wrote at 2021-12-02 07:02:37:

Color is really, really hard.

I love the clarity of black-and-white imaging, but the thing that keeps me coming back to it more than anything else is the ability to successfully render my intended image on more electronic platforms.

When working in color, I either gravitate toward images with unambiguous color or accept the fact that only those with calibrated screens will see what I intended to create. Moreover, those calibrated screens need to be calibrated to the _same reference_ I'm using, or they'll see something different.

Even with prints, without controlling the viewer's ambient light, you'll still get different perceptions. It is really hard.

Agreed that debugging is hard, too. I've recently come across the notion of 'canary images' [1], which can turn some of the debugging effort into a binary search.

[1]

https://instagram-engineering.com/bringing-wide-color-to-ins...
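
For reference, a canary image in that spirit can be generated with a few lines of Pillow. Two assumptions here: you have a Display P3 .icc file on disk (the path below is hypothetical), and sRGB's reddest red lands at roughly (234, 51, 35) when expressed in Display P3 coordinates:

    from PIL import Image, ImageDraw

    # Hypothetical path; ship whatever Display P3 profile you have.
    p3_profile = open("DisplayP3.icc", "rb").read()

    img = Image.new("RGB", (512, 512), (255, 0, 0))      # P3's reddest red
    draw = ImageDraw.Draw(img)
    # sRGB (255, 0, 0) is approximately (234, 51, 35) in Display P3.
    draw.rectangle([128, 128, 384, 384], fill=(234, 51, 35))

    # Tag the pixels as Display P3; untagged they would be read as sRGB.
    img.save("canary.png", icc_profile=p3_profile)

If every step of the pipeline honors the profile, the inner square looks visibly duller than its surround; if any step ignores it, the two reds collapse into one.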

mnahkies wrote at 2021-12-02 15:14:07:

I've seen landscape paintings done in such a way that they change based on the illumination; it was a very impressive effect.

There's an article about the guy here

https://thisnzlife.co.nz/queenstown-painter-tim-wilson-celeb...

relevant quote:

> I use a lot of translucent, transparent and interference pigments and when the light levels change, the underpainting punches through

JKCalhoun wrote at 2021-12-02 14:29:33:

Apple introduced their ColorSync technology a couple of decades ago. (Early in my career I worked on the ColorSync team.)

Apple relied on ICC profiles (International Color Consortium) to define the color characteristics of each device in a color workflow. You would not modify the bytes of the image source as it passed through your pipeline; you merely rendered it differently using the ICC profiles and math.

A computer display generally has a much wider, richer color gamut than an inkjet printer. Naturally an image rich in color can look better on the display than when printed.

When going from the wide gamut of the source image to a much smaller gamut, you have some choices: you can simply clip the outlying colors and therefore "mute" your dynamic color range, you can scale down the entire gamut of the source image to try to fit it within your destination gamut, or some other compromise (called "rendering intent" in the ColorSync jargon).
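
A toy sketch of those two choices on linear RGB, where "out of gamut" just means outside [0, 1]. This is a stand-in for the idea behind rendering intents, not an implementation of real ICC ones:

    import numpy as np

    LUMA = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 / sRGB weights

    def clip_intent(rgb):
        # Clip outlying colors: cheap, but "mutes" saturated detail.
        return np.clip(rgb, 0.0, 1.0)

    def compress_intent(rgb):
        # Scale each pixel toward its own luminance just enough to fit,
        # loosely in the spirit of a perceptual intent.
        y = np.sum(rgb * LUMA, axis=-1, keepdims=True)
        d = rgb - y
        with np.errstate(divide="ignore", invalid="ignore"):
            t_hi = np.where(d > 0, (1.0 - y) / d, np.inf)
            t_lo = np.where(d < 0, -y / d, np.inf)
        t = np.minimum(1.0, np.min(np.minimum(t_hi, t_lo), axis=-1, keepdims=True))
        return y + t * d

    too_red = np.array([1.3, 0.2, -0.1])        # a red the destination can't reach
    print(clip_intent(too_red), compress_intent(too_red))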

There were other issues like color temperature, plus issues specific to particular devices: white-point, black-point, etc.

A nice thing about this workflow was of course that the source image was unmodified across the pipeline, rendered as best it could be for the specific output (with the specified intent). The math worked well enough, in fact, that you could map the source image to your printer profile, then map those new values to your display, and essentially "proof" the image: see what the image is likely to look like when the gamut is restricted by the print device.

When Apple and other companies started introducing wide-gamut displays this was a very exciting development since it kicked open the gamut of our displays and a properly color-managed OS could, for example, really show you some of the rich reds the cameras had captured (in RAW of course) that previous display technologies could not.

Funnily, a non-managed color workflow on the same display would look "off", as things like the red close buttons in the window corners looked blazingly red, painfully red almost. Specifying RGB as (1.0, 0.0, 0.0) for red was no longer going to look right; it now needed to be pulled in to a gamut we were more used to, using something like (0.9, 0.0, 0.0). (Software of course would naively still specify (1.0, 0.0, 0.0), but I believe when the window backing store then rendered those pixels to the display it did the color matching that pulled the red down; if you wanted to display a RAW image you would assign that camera's ICC profile to the backing store to prevent the default match.)

The downside of the ColorSync model was of course that every device had to be calibrated. Displays were fairly trivial, but printers, as you probably know, needed a different profile for not only different inks but also different papers, as the same magenta ink will look different on coated versus non-coated paper. And printers were particularly bad color devices because there were no good mathematical models using power curves and black-body temperatures the way a display could be modeled; instead, large 3D tables of color values, printed and carefully measured and later interpolated between, were needed for each ink + paper combination.
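
A minimal sketch of how such a table gets used, assuming a hypothetical (n, n, n, 3) numpy array of measured values and plain trilinear interpolation between the eight surrounding grid points:

    import itertools
    import numpy as np

    def apply_3d_lut(lut, rgb):
        """lut: (n, n, n, 3) measured table; rgb: three floats in [0, 1]."""
        n = lut.shape[0]
        x = [min(max(c, 0.0), 1.0) * (n - 1) for c in rgb]
        i = [min(int(v), n - 2) for v in x]               # lower grid corner
        f = [v - j for v, j in zip(x, i)]                 # fractional position
        out = np.zeros(3)
        for corner in itertools.product((0, 1), repeat=3):
            w = 1.0
            for axis in range(3):
                w *= f[axis] if corner[axis] else 1.0 - f[axis]
            out += w * lut[i[0] + corner[0], i[1] + corner[1], i[2] + corner[2]]
        return out

    lut = np.random.rand(17, 17, 17, 3)   # stand-in for a measured 17^3 profile grid
    print(apply_3d_lut(lut, (0.25, 0.50, 0.75)))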

sRGB was something I remember Microsoft getting behind. The model, as it was explained to me, was to define the color space that 8-bit RGB values were supposed to conform to. So if you could dial in your display to show sRGB, and your printer to print sRGB, then no color management is really needed: WYSIWYG.

The downside of course was that sRGB was a narrow gamut that gimped the displays' capabilities, and yet I suspect it was outside the capability of many printers.

Adobe only slightly remediated that by creating AdobeRGB that followed the same idea, just kicked open the gamut a bit more.

Anyway, that's the way I remember/understood it.

aikinai wrote at 2021-12-03 01:07:02:

Your mention of blazingly red close buttons immediately brought to mind how the iPad looks when using Sidecar. I've tried to use it many times, but can't stand the way macOS renders color on the iPad, including eye-burning colors in the traffic light buttons.

Is this what causes the issue? Does the Mac naively send high-saturation colors to the iPad screen, which maybe has a wider gamut? I've tried changing the color profile on the iPad, but Sidecar doesn't support that, so it's stuck looking terrible and I never used Sidecar for real.

Now I'm just waiting for Universal Control since then the iPad will be managing its own color.

JKCalhoun wrote at 2021-12-03 14:42:16:

It is conceivable to me (but I have no first-hand knowledge) that the same color-matching for backing-stores is not in play in the iOS window server.

rocqua wrote at 2021-12-02 10:08:38:

I wonder how the canary image interacts with 'rendering intent'. I could well imagine 'relative colorimetric' or 'perceptual' methods of converting to sRGB color space essentially scaling down how red the square is to allow showing the embedded logo in a redder color than the rest of the square.

fxtentacle wrote at 2021-12-02 09:40:49:

X-Rite, for example, sells color test cards, verified color PNGs, and color measurement meters. You can use those to make sure the color matches up from paper to scanned file to screen. The opposite direction is more difficult, because most screens can display colors that you can't print with CMYK.

hulitu wrote at 2021-12-03 16:40:51:

It's so bad that it's special. When I see sRGB in a monitor spec, I've learned not to expect much (i.e. 24-bit colors which look like 16-bit).

fxtentacle wrote at 2021-12-02 09:05:31:

In my opinion, the main specialty of sRGB is its nonlinear gamma (to emulate perceptual brightness), while RGB, XYZ, and LAB have linear gamma (to match photon intensity).

Accordingly, the conversion formulas given in the article will be wrong for 8-bit sRGB images as they are usually used in JPEG or PNG, because they lack the gamma conversion.

Dave_Rosenthal wrote at 2021-12-02 15:34:57:

As sibling comments are pointing out, yes, any non-1.0 gamma (i.e. intensity^gamma) is a non-linear transformation, but what I think you are trying to point out instead (and is weird/unique) is that sRGB has a "non-constant gamma". Or, said another way, the non-linear intensity transformation cannot be represented by a simple gamma value. The transformation is, in fact, piecewise, with a linear transformation in the region from 0 to [some low intensity] and a more traditional gamma-like curve from [some low intensity] to 1.0.

(In practice this curve is a pain, so many systems over the years have just settled to approximate it as a 2.2 gamma curve.)
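
For reference, a sketch of that piecewise curve using the published sRGB constants (the linear toe runs up to about 0.0031308 on the linear side, 0.04045 on the encoded side), next to the common 2.2 shortcut:

    def srgb_encode(u):
        """Linear light in [0, 1] -> encoded sRGB value."""
        if u <= 0.0031308:
            return 12.92 * u                     # linear segment near black
        return 1.055 * u ** (1 / 2.4) - 0.055    # gamma-like segment

    def srgb_decode(v):
        """Encoded sRGB value in [0, 1] -> linear light."""
        if v <= 0.04045:
            return v / 12.92
        return ((v + 0.055) / 1.055) ** 2.4

    approx = lambda u: u ** (1 / 2.2)            # the usual approximation
    print(srgb_encode(0.5), approx(0.5))         # close, but not equal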

formerly_proven wrote at 2021-12-02 09:25:36:

Gamma is by definition non-linear so a CSC matrix can never account for it. Pretty much all color spaces intended for content delivery use some kind of gamma correction because as you say brightness is perceived on a log-scale and you don't want to waste any bits on stuff nobody is going to perceive.

fxtentacle wrote at 2021-12-02 09:41:58:

I agree with you, but I see that as an omission in the article: they treat sRGB like RGB when one is logarithmic and the other has linear gamma.

zokier wrote at 2021-12-02 10:48:26:

"RGB" does not really specify any particular colorspace, s I don't know if it could really be said to have any well-defined gamma, linear or not.

fxtentacle wrote at 2021-12-03 10:17:31:

Agree again, but the article assumes linear gamma for RGB. Also, if I remember correctly, CIE RGB is specified to be linear so as to be compatible with CIE XYZ. Since the article omitted the CIE prefix on XYZ, I believe they also meant to refer to the CIE RGB variant. Or at least, that's where that conversion formula came from.

pezezin wrote at 2021-12-02 12:50:36:

CIELAB is non-linear; it has a gamma of 3 (a cube root, actually):

https://en.wikipedia.org/wiki/CIELAB_color_space#Converting_...
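
Concretely, a sketch of that cube root (which has its own small linear segment near black), using the usual D65 white point values:

    def lab_f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

    def xyz_to_lab(x, y, z, white=(0.95047, 1.0, 1.08883)):   # D65 white
        fx, fy, fz = (lab_f(v / w) for v, w in zip((x, y, z), white))
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    print(xyz_to_lab(0.95047, 1.0, 1.08883))   # white -> (100.0, 0.0, 0.0)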

CarVac wrote at 2021-12-02 11:36:19:

The conversion formulas are applied after you linearize, though…
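
For example, going from 8-bit sRGB to CIE XYZ, you undo the transfer curve first and only then apply the 3x3 matrix; a sketch with the standard sRGB-to-XYZ (D65) coefficients:

    import numpy as np

    SRGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    def srgb8_to_xyz(r, g, b):
        def decode(v):                      # invert the sRGB transfer curve
            v /= 255.0
            return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
        linear = np.array([decode(c) for c in (r, g, b)])
        return SRGB_TO_XYZ @ linear         # the matrix only sees linear values

    print(srgb8_to_xyz(255, 128, 0))        # an orange, as CIE XYZ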

AndrewSwift wrote at 2021-12-03 08:26:02:

I don't have an opinion on the technical merits of the sRGB spec, but I do know from experience that it's really limiting.

Modern screens are capable of an enormous range of colors that just aren't visible on the internet because of the sRGB standard.

Visit

https://svija.love

in Safari on a modern Mac, and you can see what Display P3 or ProPhoto RGB color can do.

Honestly, I will never forget the day a couple of years ago that I first saw what my monitor was actually capable of. I had been building websites in sRGB since 1995 and I was just blown away.

The sooner the new color profiles are widely adopted the better — I for one can't wait.