mildly infuriating


~madqubit

Much like SLURM. We use Broadcom (CA) workload directors: one server acts as the scheduler, and all of the workloads get pushed out to the servers. Mainly Oracle's computation suite, Informatica, Dataserv, etc. We have a lot of internal data processing as well, but they're really trying to get off of it and shoehorn everything into something else. It's quite sad, really. The in-house tools are so reliable and beautifully done on the backend. The front end… well. It was definitely made by a backend dev. It isn't pretty to look at, but dang it, everything is there and it makes logical sense where everything is.

Spotty networks are the bane of my existence. Oh? This super important job failed because of a ping spike? Let me call an analyst at 3AM and deal with their grumpiness because of a reroute that caused latency to go up for a few seconds.

The only time something fails on the mainframe is when the analyst broke the code, forgot to update the JCL, or a resource like IMS is down for maintenance and they forgot to hold the job.
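For anyone who hasn't touched it, a "forgot to update the JCL" failure usually looks like this: a job deck still points at a dataset that was renamed or restructured. A rough sketch (job, account, and dataset names are all made up):

```jcl
//NIGHTLY  JOB (ACCT),'DATA LOAD',CLASS=A,MSGCLASS=X
//* Copy the daily extract. If PROD.DAILY.INPUT gets renamed
//* and this DD statement isn't updated to match, the job
//* fails before it ever runs.
//STEP1    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=PROD.DAILY.INPUT,DISP=SHR
//SORTOUT  DD DSN=PROD.DAILY.OUTPUT,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,1)),UNIT=SYSDA
//SYSIN    DD *
  SORT FIELDS=COPY
/*
```

The upside is that those failures are deterministic: the deck is wrong, you fix the deck, it runs. No ping spikes involved.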


Replies

~tetris wrote:

We used the Sun Grid Engine in my old group before it got bought and destroyed by Oracle, and it was janky and ugly as hell -- but it worked, and everyone just needed a day or two to understand how it worked before submitting jobs to it.

A very small, easy-to-configure piece of software for a specific task. I look at newer systems like Hadoop or whatever Amazon is doing these days, and I'm completely lost on how to interact with them.

(Do I need to create an account? Is it a local account? Does every job need to be registered in order to run? etc.)
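And that's the thing: the entire SGE workflow fit in one small script. No accounts to create, no job registration — your Unix login was the account. A minimal batch script might look something like this (job name and log file are made up; the `#$` lines are qsub directives that the shell itself just treats as comments):

```shell
#!/bin/sh
# Minimal SGE batch script sketch -- names here are hypothetical.
#$ -N nightly_sim      # job name as it appears in qstat
#$ -cwd                # run in the directory you submitted from
#$ -j y                # merge stderr into the stdout log
#$ -o nightly_sim.log  # where the merged log goes
msg="running on $(hostname)"
echo "$msg"
```

Submit it with `qsub job.sh`, watch it with `qstat`, and that was roughly the whole interface.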