💾 Archived View for dioskouroi.xyz › thread › 29450123 captured on 2021-12-05 at 23:47:19. Gemini links have been rewritten to link to archived content


Creating a Solaris 10 zone on OpenIndiana

Author: zdw

Score: 30

Comments: 23

Date: 2021-12-05 15:51:23


________________________________________________________________________________

hereforphone wrote at 2021-12-05 17:56:24:

This is a good opportunity to discuss Solaris in general. I've used Solaris for fun in the past. I couldn't get hooked on it. It seemed overly complex and not very user friendly. I'm a very long-time Linux user and I use BSD as well.

What were the primary use cases for Solaris in its prime? Is it still used anywhere now?

Bluecobra wrote at 2021-12-05 18:41:35:

> What were the primary use cases for Solaris in its prime?

I used to administer many Solaris hosts in the past and they were super reliable for hosting enterprise services like mail, NFS, NIS, etc. Kernel panics were pretty rare. One really nice feature is that there’s no concept of out of memory killer. I remember one day our LDAP host went haywire and the load average shot up to something like 800. I was able to still get in as root and kill the offending processes.

I have fond memories of Solaris 10, including ZFS, dtrace, zones, SMF. The X4500 “Thumper” storage appliances were pretty bad ass back in the day.

Wowfunhappy wrote at 2021-12-05 18:53:48:

> One really nice feature is that there’s no concept of out of memory killer. I remember one day our LDAP host went haywire and the load average shot up to something like 800. I was able to still get in as root and kill the offending processes.

What does it do when it’s out of memory?

throw0101a wrote at 2021-12-05 19:14:22:

Solaris didn't (still doesn't?) allow processes to allocate memory when none is available. If a program called _malloc()_ and RAM+swap was full, the call returned an error (and ideally the program handled it). IIRC, the BSDs behave the same way by default.

Linux is the main Unix-y system that allows over-allocation.

1vuio0pswjnm7 wrote at 2021-12-05 20:40:49:

Funny, I had the opposite question when I started using Linux more often. NetBSD is like Solaris: running out of memory is not a big deal. I use RAM for the filesystem, so it does happen occasionally. On Linux, the effect of running out of memory was quite a surprise.

iso1631 wrote at 2021-12-05 19:55:24:

> The X4500 “Thumper” storage appliances were pretty bad ass back in the day.

When you dropped it on your toe

I think the instructions suggested a 4-person (5?) lift into the bay. Even with a hydraulic trolley to get it to the right level, we used three people.

throwawayboise wrote at 2021-12-05 18:55:18:

Also databases. If you were running Oracle in those days, it was almost certainly on Solaris.

znpy wrote at 2021-12-05 21:30:47:

> One really nice feature is that there’s no concept of out of memory killer.

Interesting... So what happened if you ran out of memory ?

iso1210 wrote at 2021-12-05 18:04:26:

SPARC hardware was very good in the 90s and early 00s, and Solaris was a solid Unix, unlike that upstart Linux. Solaris came out with ZFS and zones, which were both way ahead of anything in the Linux ecosystem; however, its restrictive licensing meant it generally failed to survive the explosion in growth from the cloud boom.

hereforphone wrote at 2021-12-05 18:06:15:

I had some SPARC hardware when it was already old, that I got off of eBay. Tons of hackers did in the 90s/00s. What was "very good" about the hardware? That it was RISC?

inkyoto wrote at 2021-12-06 02:15:21:

> What was "very good" about the hardware?

The switched memory architecture, which was a boon to running Oracle and transaction-processing monitors (e.g. Tuxedo) – or anything that required large amounts of memory and high memory bandwidth but did not require the highest-performing CPU.

An extensive network of partners and resellers that made it easy to purchase, service, and upgrade a SPARC system was also a major factor. If one had the money, it was just too easy to buy a SPARC box.

> That it was RISC?

That is where Sun was consistently lagging behind other RISC vendors until (roughly) the arrival of the UltraSPARC-II CPU. Before then, Alpha, POWER and PA-RISC CPUs consistently outperformed the current SPARC / UltraSPARC-I designs. Another part of the problem was that the highly optimising Sun C/C++ compiler was very expensive, and for quite a while GCC could not generate code that was as efficient – not until after the GCC folks rewrote the register allocator (circa GCC 2.95?). The problem partially stemmed from SPARC's windowed register file design. GCC eventually caught up and has been generating fast (Ultra-)SPARC code since then.

iso1210 wrote at 2021-12-05 18:19:01:

64-bit processor in 1995 (the UltraSPARC); the AMD64 spec wasn't released until 2000. From memory, Sun/Solaris kit supported far more memory than anything x86-based.

throw0101a wrote at 2021-12-05 19:17:04:

And multi-CPU/socket. The SPARCcenter 2000/2000E had 20 sockets in ~1995:

https://en.wikipedia.org/wiki/Sun4d#SPARCcenter_2000

tw04 wrote at 2021-12-05 19:44:36:

Only now is Intel finally starting to get some of the RAS features SPARC + Solaris had back in the 90s. Sun SPARC boxes were far more robust to hardware failure/recovery than their x86 counterparts, from what I remember.

https://docs.oracle.com/cd/E19620-01/805-4453/6j47hg5qf/inde...

fnord77 wrote at 2021-12-05 18:20:43:

It had ECC memory and networking built in, which at the time I think was hard to find on PC hardware. And the networking just worked. I remember trying to get Ethernet to work on a PC with early Linux. Nightmare.

tonoto wrote at 2021-12-05 19:00:56:

I don't know its current position in the market, as I haven't worked professionally with Solaris for a couple of years, but the trend is one of decline and I have trouble seeing its role in new deployments, at least for Oracle Solaris. One of the frequent use cases was in the finance sector, I believe with stability, observability and the security model among the deciding factors.

I still use the open source variant at home (illumos), I find it terrific as a hypervisor.

throw0101a wrote at 2021-12-05 19:11:35:

Telco as well: Sun sold NEBS-certified servers. Components with low fire/smoke propagation and -48V power supply variants were available for just about any 'standard' model Sun offered.

https://en.wikipedia.org/wiki/Network_Equipment-Building_Sys...

coredog64 wrote at 2021-12-05 18:44:29:

What was very helpful about Solaris was that the C library API was stable over the entire run. If you had some grotty old binary that was running on 2.5 or 2.6, you wouldn’t have to worry about running it on 8 or 9.

pjmlp wrote at 2021-12-05 21:55:37:

Besides the sibling replies: Solaris on SPARC with ADI is one of the few OSes that has tamed C, given that it makes use of hardware memory tagging for the complete OS.

vondur wrote at 2021-12-05 18:42:16:

Didn’t the Unix workstations of the 90s have more memory bandwidth compared to PCs and Macs of the time? Also, 3D graphics were probably better.

throwawayboise wrote at 2021-12-05 19:36:41:

Yes, that was the main difference between a "workstation" and a PC.

aphrax wrote at 2021-12-05 20:10:30:

I remember attending the OpenIndiana ‘launch’ after Oracle dumped OpenSolaris. I loved both systems; they were actually nice for desktop use. I think these days the resources/interest just aren’t there. Would be great to be wrong on this…

Up_From_Here wrote at 2021-12-05 17:42:53:

I wish I saw more illumos out in the open. The "server UNIX" space needs more competition, more unique approaches and solutions to the problems in our space.