And if you think stairbuilding is complex, you should look into handrailing [0]
The money quote from the article: "I've learned that the correct way to build a house is to design the handrail first, then design the stair, and the rest of the house will follow."
Part of the reason you don't notice the complexity of everyday things is that humans have spent thousands of years perfecting the details and methods that let you take your stairs and handrails for granted. And, like, 90% of everything else.
The McMaster-Carr catalog isn't >2" thick because they're trying to confuse you. It's that thick because everything they sell solves a different problem, one whose details somebody worked out years ago, and we live in a world where the experience and solutions embodied in every part get mass produced and delivered the next day for the cost of mere money.
And then there are manufactured handrail parts, which are a grotesque and simplified perversion of a properly carved handrail (discussed in the link). But at least regular people can afford a handrail for their stairs for safety too.
[0]
https://www.thisiscarpentry.com/2009/07/15/drawing-a-volute/
I spent a bit of time working at a custom stairbuilder. Handrails that smoothly curve in three dimensions, either to connect straight sections or along a curved stair, are called "tangent" handrails. Constructing these tangent handrails is challenging. It requires both geometrical rigor in devising the plans and an individual craftsperson with a very high level of manual/visual skill. While we would work out the curvature of the rail in CAD or paper plans, a huge amount of implementation detail was left to a traditional woodworker, such as carving the profile of the tangent rail to create transitions pleasing to the eye and touch. Additionally, the variable structure of wood grain meant that the choice and orientation of the wood stock often had to be done piece by piece, often with custom fixtures.
I was the resident computer nerd there, so one of my projects was adapting a 5-axis CNC router to rough out these tangent rails. We were successful to some degree, but the fixturing and programming for individual tangent sections was very complex. It worked out that the CNC approach was only economical when the design of the stair required many multiples (rare in our case). Otherwise the skill and adaptability of an individual craftsman was much more efficient, and the CNC was relegated to constructing jigs or 2D pieces.
My training was in high-end artisan furniture making, and the design challenges I saw in custom stairs were way more complex than almost any other field of woodworking I am familiar with, including the more technically avant-garde furniture makers. The only clear exception is wooden boat building, which appropriately enough was the background of many of my coworkers at that company.
I've been fascinated with wooden boat-building for most of my software career, and that interest has driven me towards the marine space. It's cool to read of other people who have a cross of software/computer skills and woodworking.
I feel that there is a deep similarity in the kind of problem-solving and craftsmanship that fine woodwork and software development both require. I'm convinced some of the (minimal) skills I developed working with my father in woodworking and metal work translate to a better ability to visualize complex systems.
Boat building is a great example of the phenomenon the OP talks about. I agree it's particularly challenging, mostly because _nothing_ is planar. You could spend a lifetime exploring the nuances of stitch-and-glue plywood construction, or bead and cove cedar strip, vacuum-bagging fiberglass, etc. In wooden boats, projects are measured in decades…
The no-code people imagine that we are on the verge of living in a world where that catalog's equivalent in software is just a few months away.
If they were right, it still wouldn't solve the problem analysis problem, but it would certainly help out the parts problem. It would also help move software into a professional engineering phase.
I've got a phrase for when I see people trying to "abstract" over certain types of problems by writing absurdly complicated code that ultimately saves negative time.
"Data abstraction is impossible."
You can abstract an algorithm. Like, how you sort. Merge sort, bubble sort, etc. Normally it doesn't matter to the rest of the code, you just want a sorted list. You can abstract a type or an object. Hey, given some type T and some function T -> int, you can get an int.
You CAN'T abstract a chunk of data which has semantically distinct members inside of it. If someone sends you a structure that has a field in it called "name", then you have no idea what that actually means. You have to get them to tell you what they mean by "name". The hard part being that inevitably "name" means one thing unless the "aux" field is set to a number that evenly divides the "misc" field at which point "name" has to be filtered by some arcane rules.
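To make the contrast concrete, here's a toy Python sketch (the Record fields and the divisibility rule are just the hypothetical from above, not anything real): sorting abstracts cleanly over any type via a key function, but no generic library can know what "name" means without being told.
```
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    aux: int
    misc: int

records = [Record("smith, j", 3, 9), Record("J. Smith", 0, 7)]

# Algorithm/type abstraction works: sorted() needs only some T and a T -> int key.
by_aux = sorted(records, key=lambda r: r.aux)

# Data "abstraction" doesn't: only the sender can tell you this rule.
def real_name(r: Record) -> str:
    # "name" means something else whenever aux evenly divides misc.
    if r.aux != 0 and r.misc % r.aux == 0:
        return r.name.split(",")[0].strip().title()
    return r.name
```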
This is why we have lists like "falsehoods programmers believe about time|names|etc". Time means different things for different applications. Names mean different things to different contexts and cultures. You have to find someone to tell you what all of the individual pieces of the object's internals mean to them.
no-code will probably see some success in niche domains where things have been fixed for years. Ultimately, many applications will need hand written code to handle all of the meaningful sub-components of the data being interacted with.
> "Data abstraction is impossible."
That's an amazing point. It explains where the object oriented proponents of decades past went wrong in thinking there would be vendors for "Car" or "Employee" objects.
Yeah, if we look at the real world "car" object, we've certainly over decades established _some_ common requirements that you could call interfaces: turn signals, headlights, a rough common size, etc. We've abstracted it + restricted the interface (through laws and regulations) to the point that you can build a highway or an intersection by following well-understood principles.
But if you want to do something more complicated... it breaks down. Imagine trying to write software to do things with a generic "Transmission" interface. All the complicated logic would still be the responsibility of the vendor so you would be out of luck trying to come up with something more granular than "check_status()" and "repair(parts: List<Object>)" as an interface. Which is so limited as to be pointless.
No, because you can absolutely abstract over data _if it is encapsulated_, which is the whole trick of objects.
You can use a Point from any vendor, without caring if it has x and y, or r and theta, or some other representation inside, as long as it correctly implements the specified interface.
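For instance (a minimal Python sketch; the Protocol-based interface here is just one way to express it), a distance function can stay indifferent to the representation:
```
import math
from typing import Protocol

class Point(Protocol):
    def x(self) -> float: ...
    def y(self) -> float: ...

class CartesianPoint:
    def __init__(self, x: float, y: float):
        self._x, self._y = x, y
    def x(self) -> float: return self._x
    def y(self) -> float: return self._y

class PolarPoint:
    def __init__(self, r: float, theta: float):
        self._r, self._theta = r, theta
    def x(self) -> float: return self._r * math.cos(self._theta)
    def y(self) -> float: return self._r * math.sin(self._theta)

def distance(a: Point, b: Point) -> float:
    # Any vendor's Point works; its internal representation stays hidden.
    return math.hypot(a.x() - b.x(), a.y() - b.y())
```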
But very few interesting real world objects can be consistently described across different pieces of software like a Point can, which is where it ties into the article.
Yeah, very much this.
Not to mention, if you want to serialize Point, then you _have_ to know what its internals look like and what they mean. There is no data abstraction.
[Okay, so when you serialize Point, you can also serialize _all_ code that will ever do anything with Point. However, we generally don't do this (partially because of security issues). But it doesn't really help _that_ much, because whatever the serialized code produces when run _has_ to be some datatype where you understand the semantic meanings behind its internals.]
This feels like what you are describing is algorithmic or type abstraction. For your example, it's something like: Given some type Point which (I guess) interacts with some type Graph you should be able to run the Plot method. Or something. And in this case, yeah you can abstract, but it's not what I'm calling "data abstraction." [Maybe my name is just not quite right.]
Data abstraction would be getting some arbitrary Point object and then being able to look into the internals and do the right thing. Whether it's x and y or r and theta.
Now you might say, "that's just bad programming if you do that," but the point is that oftentimes we don't have a choice in the matter, either because of the constraints of the project OR because the "do the right thing" part interacts with non-code.
So for example, if you want to send your Point over the network, then you either 1) Need to know the internals, what they mean, and how they work or 2) have to send along code for the remote device to execute to handle your custom internals. Case 2 is generally not done (and presents some security issues regardless).
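A sketch of case 1, reusing the accessor-style Point from upthread (the JSON wire format here is my own assumption, not anyone's standard): the moment you serialize, you commit to one concrete representation, and the abstraction is gone.
```
import json

def serialize(p) -> str:
    # The wire format must pick concrete fields. A polar point's
    # (r, theta) internals get flattened to (x, y) here, and both
    # ends must agree on exactly what these fields mean.
    return json.dumps({"x": p.x(), "y": p.y()})

def deserialize(s: str) -> dict:
    d = json.loads(s)
    return {"x": d["x"], "y": d["y"]}  # the representation is now fixed
```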
But this isn't just a technical issue. Names are notoriously difficult because data abstraction doesn't exist. How many first, middle, and last names does someone have? Is there a canonical ordering for the names? Does someone always have a name? What do the prefixes mean? What do the postfixes mean? These questions and more all have to do with what you're trying to use the name for AND what culture the name comes from. There is nothing you can do to abstract away the problem of names. And if you try you'll just end up with a lot of messy, complex code, that breaks all the time.
You may be missing the main point of the parent (no pun intended). Their proposition is that in reality the problem is not how you define a point, but that a "point" means different things to different people and thus cannot be abstracted. I deeply agree with this observation. There are exceptions of course, but my experience is that this holds in the general case.
Are you abstracting over the data or over the behaviour?
> thinking there would be vendors for "Car" or "Employee" objects.
For car object: Tesla, Ford, GM
For employee object: Randstad, Kelly, Trueblue
Much better implementation than the C++ one, too.
...and the corollary to that is turning things _into_ data is often a much simpler alternative to abstraction.
For example suppose you have a program with configuration. You might have a "ConfigurationProvider" and abstract that with an "IConfigurationProvider" and mock implementations of that interface for unit tests.
Or you could simply define a Plain Old Data structure with fields for the configuration data. Then you can construct an instance of that data structure in different ways as appropriate.
The key idea there is: "data" is something that's already automatically decoupled from where the data came from. For code you have to do work to achieve that decoupling, and it will always leak.
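A minimal Python sketch of the idea (the field names are made up): the test doesn't need an interface or a mocking framework, just a value.
```
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    host: str
    port: int
    timeout_s: float = 30.0

# Production: built from a file, environment variables, flags, etc.
prod = Config(host="db.internal", port=5432)

# Tests: just construct the data; no IConfigurationProvider, no mocks.
test = Config(host="localhost", port=5432, timeout_s=0.1)
```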
On the other hand some of the most powerful and widely used programs are those that operate on byte arrays with schemas in-band (the web) or otherwise specified at runtime (SQL).
Imagine if you needed to write C structs in Nginx or Postgres corresponding to all the business objects and data types they're going to touch. Worse, imagine you need to thread them through all the implementation!
Many business applications are written that way. I think there are opportunities on the table to stop doing it.
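As a rough illustration of the runtime-schema style (toy Python, not how Postgres actually does it): the engine stays generic because the schema is data too.
```
# The schema arrives at runtime (think SQL's catalog tables or an HTTP
# Content-Type header), not as compile-time structs.
schema = [("id", int), ("name", str), ("price", float)]

def parse_row(fields):
    # Generic engine code: no hand-written struct per business object.
    return {name: typ(value) for (name, typ), value in zip(schema, fields)}

row = parse_row(["42", "widget", "9.99"])  # {'id': 42, 'name': 'widget', 'price': 9.99}
```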
So: there are JSON parsers, serde, row polymorphism, anonymous structs, macros, monads, and dictionaries.
They all do the same type of thing, but that isn't abstracting data. That's abstracting the population of data or the serializing of it.
However none of that is ever going to know that the name field needs to strip out the first three characters, rot 13, and then parse as a guid.
Someone still needs to know what the data you're getting means. And there's no way to shake that.
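For instance, that arcane rule would end up as hand-written code like this Python sketch (the rule itself is the made-up example from above):
```
import codecs
import uuid

def parse_name_field(raw: str) -> uuid.UUID:
    # Sender-specific semantics no serde library could ever guess:
    # strip the first three characters, rot13 the rest, parse as a GUID.
    return uuid.UUID(codecs.encode(raw[3:], "rot13"))
```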
Although people do try and then we get lists like: things programmers don't understand about names.
> _"Data abstraction is impossible."_
Data abstraction is hard and many people are no good at it, but it's not impossible; it can't be, because it's required.
don't think of data abstraction as one and done, think of it as a process. Maintaining (as in, maintenance-ing rather than obeying) your data abstractions as you make modifications to code is the key to clean code, it's where the real refactoring takes place.
if your brain works like mine does, it is paralyzing to work on code without data abstractions, like always thinking of a staircase as all of its component parts rather than as a staircase
This is why we have ontologies. Still at an early stage of data abstraction, but it is getting there.
Well, they are going to be disappointed when they realize we haven't even figured out the catalog yet.
Software is half a century old, we are still using flint to produce it. The only reason it looks fancy is because it makes more sparks, and a lot faster.
We're just starting to get out of the alchemy stage right now. Wise men and magicians everywhere telling royalty that if they only pay them treasures that they will reveal the future and show them miracles. Meanwhile it turns out that NSS has a straight-forward vulnerability that everyone just somehow missed.
If you look at the history of chemistry, mechanical engineering, or astronomy then you kind of get the impression that we're probably 150-300 years away from software development working the way everyone already imagines it working.
I swear my lord, this Python oil will do miracles with your data ailments.
Lord: I doth smeared mine data with thine Python and lo, it persists in having holes and errors even unto duplicates. Guards! Execute this man!
You remind me of a quote I heard at a conference (paraphrased badly).
The Data Revolution is like going from the bronze age to the iron age.
It will get industrialized eventually, but that time frame will take much longer and will look completely different from what you expect.
I believe we lack a solid common ontology - an agreement on what individual pieces of data mean. As a result, individual pieces of software are either incompatible or inconsistent and must be constantly built anew, resulting in all kinds of bugs.
Heck, chemistry is practically a baby compared to the 1,000+ years of engineering
_> If you look at the history of chemistry, mechanical engineering, or astronomy then you kind of get the impression that we're probably 150-300 years away from software development working the way everyone already imagines it working._
If it is to happen, a serious plateauing of "progress" will need to happen all the way up the layers of the tech stack, to the point where app developers or data engineers are working with tools stable enough that previous generations would at least recognise them, let alone be able to work with them.
Engineering maturity "nirvana" for software will happen not by us getting more experienced/better at it, but by "progress" stalling in all the tools we use.
Software changes a lot basically because it is easy to change. And because we as an industry value constant work towards making it even easier to keep changing - eg cloud, containers, devops, agile etc etc
In the physical world, things have stabilised and reached a kind of local optimum across the vast majority of areas. Sure there are still incremental improvements happening, and occasional revolutionary improvements in different areas. Mostly the main bits being radically changed are the bits that coincidentally touch computers downstream from the constant churn happening to software.
Also another factor is when I think of physical engineering (and I used to work in civil/structural engineering in the 90s), for nearly all work going on, the scale of the problems being solved hasn't changed. Most building sites are the same size they were, most materials are the same, some regulations have changed, but ubiquitous CAD tools etc seems to be the major change.
So maybe, when we can't make transistors smaller, and we can't make CPUs faster, and can't increase memory or storage, things will slow down. As each layer up the stack plateaus, each layer above it eventually shifts from working on new capabilities to making what it already does more efficient, and that will eventually (it will take a while) bubble all the way up. Progress slows down a lot, and we end developers and engineers are working on a stable set of tools very much in a local (or even global) optimum. Also, maybe when/if the world's population stabilises (in decades/centuries/whenever), that might lead to an eventual limit on how much user data can be mined from people, and the scale of the data we deal with might stabilise too, even beyond the limit we'd hit once we've stopped collecting more because of the capacity limits talked about above.
Heh, I started off trying to disagree with your quoted statement by saying I don't think it would ever stabilise like other disciplines. But by laying out one way it could happen, I think I may have ended up agreeing with you :)
That catalog is here, in the form of libraries.
What no-code people miss is that by the time a piece of functionality is available as a no-code widget, it's been available as a library for years. No-code solutions will never replace developers. They instead free up developer time, allowing them to create more niche or more technically challenging things.
Wix and other website builders have completely replaced developers who make static HTML pages, but those developers have moved on to making more complex things with React and canvas and WebSockets.
Business requirements grow to match these advances. Twenty years ago, an independent store was ahead of its time if it had a website. Today, a store without online shopping - or at least an online catalog with stock information - is at risk. No-code solutions will never be enough, because competitors will use no-code and _also_ pay developers to code better functionality.
Yeah, when most people end up working cutting hair, driving cars, carrying bricks and so on... I feel replacing the people copy-pasting Stack Overflow in creative ways (productive developers) is going to remain a pipe dream for the foreseeable future.
Stumpy Nubs Woodworking recently put out a video about making hand rails in a way very similar to how it would have been done when hand-molding planes were still in common use
https://www.youtube.com/watch?v=P6ScjHsD4mI
>_for the cost of mere money_
mere money? talk about overlooking a piece of remarkable complexity
Money is fungible. All dollars, regardless of how you got them, are interchangeable.
Knowledge is not. Spending 20 years mastering piano will do little good if you need to build a staircase.
Shit, spend a couple years learning how to build furniture and then go build a chicken coop. There's shockingly little overlap between those two skill sets! That's not a hypothetical. There's a lot of knowledge that your body learns that doesn't come into play doing carpentry.
_that_ money is fungible is part of the design of money, that complex/subtle consideration is one thing that went into the construction of a monetary system. We could use non-fungible money if it seemed better.
the fungibility of money is similar to the idea that all the steps in a staircase should be the same height.
Giordano Bruno said that whatever has the most value and least cost of storage _becomes_ money.
Some money is more fungible than others. Crypto is a little less so: look at the DAO that tried to buy a copy of the US Constitution and failed, only small $ contributors can't get their donation back without losing most/all in gas.
> mere money? talk about overlooking a piece of remarkable complexity
Wrong framing. "Mere money" because we virtually all have money, there are many ways to make it, and the possession of it does not encapsulate any skillset or knowledge; exchanging it for access to artifacts generated with a massive knowledge repository is like trading sand for glass.
I kept arguing because I thought I was right. I felt really annoyed with him and he was annoyed with me. In retrospect, I think I saw the fundamental difficulty in what we were doing and I don't think he appreciated it (look at the stairs picture and see if you can figure it out), he just heard "let's draw some diagrams and compute the angle" and didn't think that was the solution, and if he had appreciated the thing that I saw I think he would have been more open to drawing some diagrams. But at the same time, he also understood that diagrams and math don't account for the shape of the wood, which I did not appreciate. If we had been able to get these points across, we could have come to consensus.
If you want to understand the value of "soft skills" like communication, it's right here.
Being able to understand that there are different points of view present, and finding the bridge between them, is a super power for facilitating teamwork. Like any other skill, some people have more natural talent, but training and practice can help almost anyone.
Studying the humanities, which sometimes comes in for scorn among technical folks, can be a way to do this. Learning to read and write critically about literature and art really starts with learning to detect and think carefully about different points of view: among characters, and between the artist and various members of their audience.
> Being able to understand that there are different points of view present, and finding the bridge between them, is a super power for facilitating teamwork. Like any other skill, some people have more natural talent, but training and practice can help almost anyone.
> Studying the humanities, which sometimes comes in for scorn among technical folks, can be a way to do this. Learning to read and write critically about literature and art really starts with learning to detect and think carefully about different points of view: among characters, and between the artist and various members of their audience.
This strikes me as highly ironic given the incredible narrowing of permitted opinions on campuses over recent years. Try seriously challenging the doctrine and one is liable to be cancelled, reprimanded, chucked off a course - even physically attacked.
Please don't start flame-war topics on HN. This is way out of scope for the article. The GP was simply saying that when you study the humanities, you read a lot of different POVs, and those help you to understand that your way of thinking might not always be perfect, or the best, but that there are many angles to looking at a problem. Thanks!
> Eschew flamebait. Avoid unrelated controversies and generic tangents.
https://news.ycombinator.com/newsguidelines.html#comments
> Please don't start flame-war topics on HN.
This is a disingenuous attempt to dismiss a valid criticism by categorising it as a flame-war topic.
The humanities today, in general, are by no means either tolerant or open (at least in the English speaking portion of the globe).
You're painting things with too broad of a brush. Not only do things vary by professor and institution, the issues you raise aren't particularly relevant or prominent to a course on Shakespeare's Tragedies, Ancient Rome, most of the field of linguistics... I could go on.
Critics in this area have latched on to a few prominent outliers or bad actors and used that to create a false narrative about the entire system.
You are right. What is happening in the major universities is not representative of the majority of universities. Therefore I agree that, depending on where you look and how you look at it, the state of the humanities is largely intact.
I think, though it is a leap, that the current <insert-dogma-of-the-day> will extend downstream but who knows...
The good/bad implementation of instruction by any one professor or institution is irrelevant to the point made by the GP, especially when they didn't stipulate study at a college level. There are more ways to learn than in a classroom.
It's probably a bigger argument than I can articulate in the time I have available at the moment, but I've long believed that this phenomenon, among many other crises, is driven by the rise to dominance of a constellation of philosophical worldviews which center around positivism (and which extends to various failed attempts to overcome it).
The categories of thought which underpin a humanistic education are denied (a priori), except insofar as they are immediately useful to material well-being. The importance of contemplation of the truth is relativized downwards, and thought must justify itself in terms of action: social change, market value, "impact", etc., which is all another way of saying that truth is made an instrument of power [0]. We are living the consequences.
In this light, it becomes apparent that the crisis crosses contested political boundaries (although it is admittedly progressives who currently have the upper hand). The main beneficiaries are, as one might expect, the powerful.
[0]: some may reply that truth has always been a socially constructed instrument of power, and that announcing it as such is simply a reduction in hypocrisy. This criticism must be reversed upon the critic: the proposition is itself a construct, only believable in a social idiom which made it so.
_although it is admittedly progressives who currently have the upper hand_
I'm not sure this is true. Opposition to this sort of thing has given the world leaders like Boris Johnson and Donald Trump. In the case of Trump, even losing the election was extremely close and the political movement he catalyzed continues unabated and, perhaps, more entrenched.
But I could be wrong: in general it is hard to make an accurate assessment here when the most accessible views into either side are the more extreme outliers that bubble up into the front pages or morning news shows of different news outlets, or pluck the right chord of anger/indignation to trend in social media.
> It's probably a bigger argument than I can articulate in the time I have available at the moment, but I've long believed that this phenomenon, among many other crises, is driven by the rise to dominance of a constellation of philosophical worldviews which center around positivism (and which extends to various failed attempts to overcome it).
I don't think this is right at all. First, positivism is dead. Michael Polanyi killed it (I forget the name of the book at the moment).
Second, when people complain about "the incredible narrowing of permitted opinions on campuses", they don't mean that only positivist voices are allowed. And those doing the narrowing aren't doing so from a positivist position at all. They're doing so from a position that regards positivism as an attempt by a power structure to assert dominance (and probably racist besides).
Let me clarify:
I like Polanyi, but positivism is still very much alive. I mentioned in my post various failed attempts to surpass it, in which I would include Pragmatism, continuations of Marxism, the critical school (excluding some late works by Horkheimer), and postmodernism, all of which retain positivism's negations.
EDIT: to supply some evidence, in Book III of _Science, Faith, and Society_, Polanyi argues that unchecked Positivism would result in a society not fundamentally dissimilar from the Soviet society which had prompted him to write the book in the first place. I encourage you to read it and compare it against your experience, asking whether Polanyi secured victory, or whether crucial aspects of his thought went unheeded.
Three problems (at least) with positivism:
1. Polanyi's criticism: Human observers are not objective. They start with their own biases. That means that any observations of events cannot be 100% trusted as a source of truth.
2. Francis Schaeffer's criticism: Within the structure of positivism, there is no way to know that what you observe is actually data. It has no basis for saying that it should be data.
3. Greg Koukl's criticism: Since positivism is a set of statements in epistemology, and not either direct observation or true by pure logic, positivism says that you can't know that positivism is true. It is self-inconsistent.
Positivism "alive" only by inertia. The position doesn't need to be "surpassed" by something, it only needs to be refuted in a way that can't really be answered - and it has been.
Anyway, none of that was my main point. My main point was that it's not positivism that is driving the closing of minds and narrowing of discourse on campuses.
> In this light, it becomes apparent that the crisis crosses contested political boundaries (although it is admittedly progressives who currently have the upper hand)
Is it? At least in the US, I'm not sure what progressive political victories compare with, say, the long-lasting effects of the Trump administration, especially in the judicial area. More broadly, many progressive causes have been getting weaker in US politics for decades. How many of the places that enacted strong rent control decades ago would be able to do so today, for instance, vs those where it's been weakened?
Following the mod's advice above, I would like to avoid taking this into traditional flame war territory.
Perhaps we might agree that progressivism as a political movement has not succeeded in separating economic from social liberalism?
_not succeeded in separating economic from social liberalism_
At the same time that conservatism (in the US) evolved from being more focused on economic conservatism to a mixture, and sometimes a focus on, social conservatism.
This is now the 12th time this has been posted on HN [0], and I see why: it is very insightful.
But his other post, 'submission and dominance among friends,' had more impact on me. Connecting our behavior to our mammal relatives like he does is not exactly novel, but combined with his frankness about his own needs -- especially the profoundly uncool desire to be "on the submissive end of this kind of clear status play" -- the piece was revelatory for me. Perhaps you'll enjoy:
http://johnsalvatier.org/blog/2017/submission-and-dominance-...
[0]
https://news.ycombinator.com/from?site=johnsalvatier.org
I know a guy, just builds stairs. Every significant house builder in the area has him in, all the work he wants. Heard him say on the phone "I have so much work, If I don't want to do a job I don't have to do it. And I don't want to do that. Thanks!"
Sounds like he's not charging enough
I'm not sure the best equilibrium is "I can barely get enough work to fill my time".
That's a very brittle position. Better to have some buffer and then you're able to turn away jobs you don't like. Maybe you leave 10% on the table, but if times get lean you don't want all of your prospects to dry up because "That guy charges way too much".
I'd say like with all freelancers and contractors, it's "I can't double my rates because then I'd lose half my business".
Everyone has to find the sweet spot in their customer's price elasticity.
Kind of a weird tangent, but can anyone elaborate on this quote: "trap a drop of water between two other liquids and heat it, you can raise the temperature to at least 300 °C with nothing happening"
I knew you could change boiling point with pressure but is this suggesting that doing something like having a layer of oil on top of an ambient pressure pot of water would prevent boiling? I tried googling for some elaboration but only found discussions of superheating water via pressure changes.
I'm not sure of the experiment that this might be referring to, but it sounds like superheating[0]. Trapping the drop of water between two other liquids could be some version of a "clean container, free of nucleation sites" as described in the first paragraph of the wikipedia article. Other ideas?
[0]:
https://en.wikipedia.org/wiki/Superheating
To me, one of the more interesting arguments against the universe as a simulation came from Sean Carroll. I'm paraphrasing, but he noted that any simulation would be limited in its possible detail, because it has to use the resources of the real world, and therefore is likely to be less detailed than the real world. And the logic of simulated universes is that they, too, have simulated universes, and you keep on going, and you end up with a much greater probability that you're in one of the simulations than that you're in reality.
But, it would also follow, that the version of reality that you experience is not very rich in detail relative to whatever is "real."
The author, John, says that blindness to detail can make you "intellectually stuck." So I would wonder if an implication of Sean Carroll's argument, an implication of the universe as a simulation, is that we would experience less detail and therefore be more prone to getting "stuck".
It seems almost touchingly naive to assume that a superuniverse would have similar limitations as this one. We don't know _anything_ about how different universes might be structured.
That's not just because we can't access them, but because a superuniverse system assumes a coherent set of metalogic that makes certain universe features possible and other features impossible/unlikely
Not only can we not access that, there's absolutely no justification for generalising from this universe to any metasystem in any way.
_Especially_ if you accept the premise that this simulation has less detail than the original. Because then of course you don't know what detail is missing, or what detail is possible.
So that argument nicely destroys itself.
So you have to start with say, a universe with _no_ constraints at all. Impossible to imagine. What would then be the purpose of a simulation at all? There wouldn't be any, because no constraints means everything is knowable instantaneously. A universe with no constraints means that nothing really ever happens.
But supposing there were some reason to run one in a universe with no constraint (impossible, since even having a reason to run a simulation is itself a constraint), would the simulation have any constraints at all? I don't see a reason why. Infinite compute available to simulate everything all at once would be trivial.
So we can then deduce that a universe in which a simulation is running has constraints. We do not know what these constraints are, but we can be certain they exist.
Some of the constraints within the simulation, unbeknownst to the entities inside the simulation, are the constraints of the universe in which it is running, and not constraints imposed by the simulation itself.
Which constraints are these? Well, it would stand to reason that these constraints would appear to be fundamental. Maybe in ours it's the speed of light demonstrating the compute limit of the simulation. Maybe our simulation exists to better understand their constraints, and so something much more difficult to understand is indicative of their constraints. Maybe those constraints are reason itself. Maybe the way the universe unfolds as a process is indicative of their constraints. Maybe something else, maybe I'm wrong and it's something that doesn't appear fundamental at all, but it doesn't really matter.
_Information leaks into your universe from the universe in which it exists._ No matter the nature of the "bare metal" universe in which your universe is running, there will always be some aspect of yours that is reflective of the higher level one, and there will always be a way to tell that you are in a simulation. All that is needed is to examine the details of how yours works; hand-wavy and much harder than it sounds, but theoretically it is possible. You cannot create a simulation in which the entities inside cannot possibly know they're in one, and in which nothing at all can be known of the higher reality.
Hmm maybe, but also what about the argument that the most likely designers of a simulation such as ours would be those who inhabit a universe like ours? A good reason to simulate a universe would be to ask "what if?" questions, which would seem of greatest utility to those living in similar universes…
Or our universe may literally be a game of dwarf fortress running on commodity hardware.
These are all similar to the epistemological arguments of unknowable truths. Interesting but ultimately fruitless.
>dwarf fortress
Ridiculous! It's much more probable that our universe is a game of Factorio being played by an idiot.
Take a look at abstract art. There's no requirement that a creation resemble the creators in any recognizable fashion. If we're a simulation, we have no idea if we're the creator's equivalent of cave paintings, baroque, modern, anime, surreal, noir, dadaist, cubism, any of it.
> what about the argument that the most likely designers of a simulation such as ours would be those who inhabit a universe like ours?
So your argument is Genesis 1:27? "So God created human beings in his own image. In the image of God he created them"
Big, if true.
Who is to say we are experiencing rich detail? How do we know we aren't stuck? Stuck on the planet? Stuck in this universe? Stuck in our minds?
It's an interesting theory, though I'm not quite sure I buy the detail-resolution argument as the one that defeats it; rather, like Descartes' point that you can't trust your senses, it's hard to care about, especially with no proposed test (and wouldn't the test be part of the simulation too?). So you kind of run into the "well, who created _that_?" kind of marathon, and I don't know of any philosophical conjectures with those characteristics that have been solved. Could be wrong though.
>Who is to say we are experiencing rich detail? How do we know we aren't stuck? Stuck on the planet? Stuck in this universe? Stuck in our minds?
I largely agree with where you're at on this. What we experience as normal might be extremely low detail relative to some more detailed underlying reality.
However, the flip side is that we know quite clearly what it is for a world to be less detailed than our own. So it seems like there's room to go "further down".
I also don't necessarily think it's a decisive argument, but what's interesting to me is that argument in particular stresses detail, and what it has in common with this article leads to an implication that we're more likely to get "stuck" in a cognitive sense, so that there are shapes to our thinking that correspond to the level of detail we're able to experience.
> However, the flip side is that we know quite clearly what it is for a world to be less detailed than our own.
Do we? I'm not trying to do the Socratic Method here with endless questioning but how can I know or experience what less detail means? Don't I only have access to all of the detail I could have access to by definition?
I take the notion of "resolution" to relate to things such as the range of visible color, or the complexity of shapes and sounds. If you agree with that assumption (and you don't have to) that details are of that nature, you can always conceive of less, but not necessarily more.
I don't think visible color is a good example because we still detect the resolution of it through instruments. Complexity of shapes is an interesting one. Idk about sounds though - depends on what we mean by complexity here because while Bach is more complex than idk whatever people listen to these days I'd still put them at the same "resolution". Though I guess there's a lot of things to define as far as terms go.
When I think about resolution I'm kind of imagining things that aren't measurable by us at all or possible to experience, because if we can measure them or experience them then they are part of our "resolution" sphere I'd think?
It's definitely interesting to think about. I'd love to get your thoughts and those of others too.
This is a bit out there, but I've been binge watching people talking about their NDEs lately on youtube, and some mention that our reality feels less real, almost cartoonish, after having experienced the beyond.
That is nonsense. We can do "unlimited" detail by just-in-time generating details on observation, extrapolating from the tail of some deterministic mathematical constant.
Not everything that can be seen needs space - lots of it can be reproduced with formulas just in time, on observation.
The only observable thing would be a "detail snap", where the same "high-res" details recur because the same hash is produced. Which could be avoided by hashing in the observer, at some level of detail.
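A toy Python sketch of that idea (the names and scheme here are invented for illustration): detail is generated deterministically on observation, with the observer optionally hashed in, so nothing needs to be stored.
```
import hashlib

def detail(x: int, y: int, zoom: int, observer: str = "") -> int:
    # The same coordinates (plus, optionally, the observer) always hash
    # to the same generated value, so "unlimited" detail costs no storage.
    key = f"{x},{y},{zoom},{observer}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
```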
Carroll calls it the "Resolution Conundrum" [0]. If you believe in the underlying logic that there is a kind of runaway effect, of simulations within simulations, etc, it's inevitably going to be the case that the most probable simulated universe is the one with the least available computing power. So whatever trickery you use to get around the problem of low resolution just substitutes one problem for another, and then the new thing runs up against limits of computing power.
We would be most likely to experience the simulation that's most likely to generate noticeable artifacts as the simulation runs up against limits.
0:
https://www.preposterousuniverse.com/blog/2016/08/22/maybe-w...
It could also be that the most probable simulated universe is one in which it is very hard to simulate other universes.
We do 2D simulations, games of life, etc. If our universe is simulated, why wouldn't the "parent" one be vastly more complex (more dimensions) and a much better place to run simulations?
It could be highly hyperdimensional and contain unimaginable levels of detail, even "time" in our universe could be a construct (or multidimensional) in the parent universe.
the thing that I don't understand is why people think that there would be a simulation of a universe; surely it would be more likely that this is a simulation of "you" (well me as obviously you lot aren't real) - vastly simpler and more plausible
No, we can go deep, but nowhere near unlimited zoom. Watch a video that shows a fractal zoom to, say, 10^20,000 and that feels like forever, but it's nothing next to 10^(10^20,000), let alone anything actually large.
Even seemingly simple problems run into surprising issues when you want arbitrary precision. What's the ones-place digit of π^(π^(π^π))? I mean, sure, you could fake unlimited zoom by just picking arbitrary numbers like, say, 4, but that's no longer zoom.
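To give a feel for the numbers (a sketch using Python's mpmath; the precision setting is arbitrary): even three levels of the tower already takes ~19 significant digits before the ones place means anything, and the fourth level is hopeless.
```
from mpmath import mp, pi, power

mp.dps = 30                   # decimal digits of working precision
x = power(pi, power(pi, pi))  # pi^(pi^pi) is about 1.34e+18
print(x)                      # ~19 digits just to reach the ones place

# pi^(pi^(pi^pi)) has roughly 6.6e17 digits; its ones place is out of
# reach of any feasible precision, which is the point above.
```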
Heisenberg discovered the simulation.
This always makes me think of the 'uncertainty principle' and quantized values in physics. As a layman, it seems to me like both of these combine to limit the amount of information needed to describe the state of the universe.
I don't know enough about quantum mechanics to check if this makes sense. My wild imagination suggests that 'wave function collapse' could be modeled as the result of a simulation being forced to resolve the state of a specific particle for further calculations. I imagine it is hard to come up with a plausible 'simulation model' for this that includes features like probability wave interference and entanglement of particles.
Such a model would be amazing for insight into quantum mechanics, and also have far-reaching consequence for metaphysics. It would strengthen the argument for us living in a simulation, if the peculiarities of quantum mechanics can be modeled as artifacts of a particularly efficient emulation.
And to add to the sibling reply by OneTimePetes, even if the simulation had to generate a nearly infinite amount of details, the simulation itself could have its internal time slowed down, and the beings inside the simulation would not be able to perceive it as such.
The problem is with finding an argument as to why, as a matter of _principle_, we would expect simulations to be accompanied by such fixes.
If you buy into the bootstrapping premises (which I don't!) that simulations beget more simulations, and that this runs into limits of computing power, there are clear and consistent principles driving that runaway process.
However, so far as I can tell, it's not clear that there would be any _principle_ leading us to conclude that there would be a standardization of simulations relating to image hashing or fiddling with the experience of time. Those types of fixes would fall into the category of idiosyncratic details that either would or wouldn't hold in particular one-off scenarios, and don't generalize into any transcendent probabilistic implication that such experiences are most likely. That kind of generalization is what would be necessary for it to be responsive to the argument about simulations.
There is a fun little esoteric/new age youtube video* about reality glitches that explains how reality is (in loose terms) more "mathematical" in nature than "material" (as we mostly perceive it). These are the reasons mentioned:
1) speed of light is an absolute cosmic speed limit
2) wave/particle duality
3) conservation of energy - everything is eternal, nothing can be truly created or destroyed
4) quantum entanglement shows that reality is non-local
5) singularities at center of black holes where seemingly all physical laws break down
[*]
Ugh, nope. This video is not only too naive at the intro (could be just a watchbait), it further doesn't explain what it claims, neither is it consistent nor starting from any preexisting model. It starts from limited factoids and uses them as fundamentals to go to some crazy not-even-a-hypothesis. E.g. in GR the conservation of energy is an "it really depends" thing, not a universal law. And c is not a galactic speed that you or your flashlight may "hit". To save a stranger some time on the c part so they can make their own mind up about this video: it's right in the intro and then at 11:30+.
I've only watched the first five minutes of that video; I think he has some fundamental misconceptions that make it difficult to take much from what he's saying.
e.g.
for #2, wave/particle duality (which isn't necessarily even a thing, depending on what foundation of QM you subscribe to), he says that everything is energy and that energy is light; this is not correct (the 2nd part)
for #3 energy is not necessarily conserved over long time frames in our universe
#4 depends on which foundation of QM you subscribe to, but doesn't necessarily require non-locality (AIUI, if your foundation results in this problem, then you must chose between locality or reality to explain things - reality in this context not really meaning what it might mean in common parlance)
#5 singularities appear in black holes because the maths breaks down because we don't have a unified theory of QM+gravity; they aren't magical. Also the big bang wasn't a singularity in the way he seems to be implying, again AIUI
I'm no expert, but I would suggest you review some more technical videos before reading too much into the link you posted (I suggest PBS Space Time / FermiLab / Sabine Hossenfelder)
I don't know, humans exist, compared to the most granular levels of reality, at a macroscopic level. That leaves plenty of room for lost detail in a simulated universe within ours that beings at our scale would never really notice.
Separately, how do you know that we aren't prone to getting stuck? We all do sometimes-- how would we ever know if it's more or less than "normal" in an non-simulated universe?
Carroll is a physicist so he may have more technical reasoning grounded in physics-- I don't know-- but it sounds like a rhetorical rather than scientific justification.
>how do you know that we aren't prone to getting stuck?
I never said we aren't prone to getting stuck, in fact I was trying to suggest the opposite although maybe I wasn't very clear.
A possible implication of being in a simulation, if we are in one, is that maybe we're _more_ likely to get "stuck."
That possibility was the very connection between this article and Sean Carroll's argument that led me to make my comment.
I mean that you have no benchmark to evaluate the relative metric of how stuck we get. Certainly we get stuck sometimes. How do you know that it's less often than in a simulation? Perhaps on a non-simulated universe there is _*no*_ getting stuck. Or it happens so little as to be a virtually unknown concept. Relative to a universe higher up we might be getting stuck constantly.
The entire idea is a counterfactual with absolutely no way of evaluating it as evidence for the opinion that we're not simulated. It may even be true that successive layers of embedded universes would have more problems of that sort, but we still lack any way to establish that our own rate is the baseline. Considering that physicists themselves lack consensus on the question, I don't see how one physicist's philosophical musing on the topic can be taken as more than (currently) unprovable speculation.
>I mean that you have no benchmark to evaluate the relative metric of how stuck we get.
Well I only just posed the idea, and a lot would depend on working out what the terms mean. Of course it's difficult to reason in cases with vague terms.
Where I part with you is in the conclusion you draw from the vagueness. You rattle off a string of questions, to each of which my response would be along the lines of "good question! How we answer that depends on XYZ, I think this is more likely, this is less likely. In this case you do this, in that case you do this other thing, etc."
For me, those kinds of questions are prompts that allow you continue a constructive conversation in a spirit of good faith and intellectual curiosity. It's the mark of an educated mind to be able to entertain a thought without necessarily accepting it, and all that. But you don't seem to want to have the questions engaged with or answered so much as you want to use them in a performative expression of helplessness to show that the whole exercise is doomed. To me that's skipping about 3, 4, or 5 steps in the middle.
>Considering that physicists themselves lack consensus on the question, I don't see how one physicist's philosophical musing on the topic can be taken as more than (currently) unprovable speculation.
Again, you're chasing ghosts. Before you said "how do you know that we aren't prone to getting stuck?" when I had never suggested we are not prone to getting stuck. Now you're saying can't "be taken as more than (currently) unprovable speculation." I agree! It is speculative. Where is this disconnect between what I'm saying and what you're responding to coming from?
Submitted a few times, discussed twice with a considerable number of comments.
https://news.ycombinator.com/item?id=22020495
(2020; 115 comments)
https://news.ycombinator.com/item?id=16184255
(2018; 296 comments)
Making stuff to specified dimensions has a well understood workflow. Order of operations matters. That problem dominates machining. It's a big deal for carpentry, too. It's not hard. Most people figure it out before the second time they build something.
(Although making anything stay at specified dimensions from Home Depot "fresh from the tree" lumber is difficult. That is why kiln-dried lumber exists.)
Wooden boat tension joinery - now that's hard. You need the results of several centuries of puzzle-solving and dealing with the effects of water to do that well.
There was some article a few years back on HN about two incompetents trying to build a cabin in the woods, with too little experience, too little planning, and too much drinking. Their worst mistake is that they didn't know that you build roof trusses at ground level. (Or just buy them prebuilt.) Then hoist completed trusses into place. They were trying to stick-build roof trusses up in the air at roof level. That did not end well.
This stuff isn't rocket science. There are books easily available, and lots of people who've done it before.
You are thinking of this article, which was a good read:
https://www.outsideonline.com/culture/essays-culture/friends...
This was a great read. Another way to appreciate the endless level of detail if you look hard enough is the Coastline Paradox[1], an idea expanded and articulated by Benoit Mandelbrot.
Life has infinite levels of depth if you keep peeling back layers and look down a bit further. And when you think you know something, dig deeper and you realize you know less than you thought.
https://en.wikipedia.org/wiki/Coastline_paradox
I have no formal education in physics or metaphysics or anything but it has long seemed obvious to me that reality is infinitely fractal in both the macro and micro. whenever a new particle or whatever is discovered, it seems like many people get excited that we're on the verge of understanding the elementary fundamental building blocks of reality or something, when I can't see any reason _not_ to believe that reality will continue to be fractal infinitely when "zooming in."
if one can accept this, then one can accept that any perceptions about reality he may have can only be at best a "useful model," and such models should be constantly updated in response to new data and observations. of course, this means that there is quite a bit of incentive in deliberately shaping the models people use to perceive reality...
If the universe is infinitely large (as we suspect it is), then it should also be infinitely small (even beyond our capacity to measure it). Otherwise, it's not really infinite.
The matter of whether the universe is or isn't infinite is definitely not a settled matter.
> Because we cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite. Estimates suggest that the whole universe, if finite, must be more than 250 times larger than the observable universe. Some disputed estimates for the total size of the universe, if finite, reach as high as 10^10^10^122 megaparsecs, as implied by a suggested resolution of the No-Boundary Proposal. [1]
Secondly, rays (the geometric construct that have a point origin and extend infinitely in one direction) _are_ infinitely long, but any point on them is finitely far away from the end. Why couldn't the universe be the same way -- unbounded in one direction (of scale), but bounded in the other?
[1]
https://en.m.wikipedia.org/wiki/Universe#Size_and_regions
The "colonizing the universe" diagram seems to be taken from this paper:
https://www.fhi.ox.ac.uk/wp-content/uploads/space-races-sett...
There is a tacit presupposition here that needs to be made explicit if we are not to betray it, namely, the distinction between properties and attributes. A property is something that follows from the essential nature of a thing. A mere attribute is something that isn't and could vary without contradicting the essential nature of a thing. So a property of triangles, following from Triangularity, is that their angles sum to 180 degrees. But the color of a triangle is an attribute that does not follow from its Triangularity. It may be important in some context, but it must be understood as secondary (of course, in a secondary way, you could say that it does tell us that the thing in question is capable of having that attribute, but the attribute itself is not the consequence of its essential nature).
A specialist can suffer from the law of instrument for analogous reasons. If you put an immunologist in charge of policy, he may reduce the common good to merely those things with immediate and obvious immunological relevance to him. Sure, freedom from infection is important to the common good, but it isn't the only thing and indeed optimizing for that in excess incurs an enormous cost to the common good.
Our education system, sadly, does not prepare us in foundational ways to reason and understand properly and this lack of intellectual and moral discipline can manifest in fixations (undisciplined passions, like in this case fear, can produce these fixations).
For some fun historical context, check out Moxon's "Mechanick Exercises" or Nicholson's "Mechanic's Companion", which are both treatises on various tradecrafts (including carpentry and joinery, which is where stairs would come in) from around the late 1600s and early 1800s, respectively. Those two books essentially serve as the foundation for the modern hand-tool woodworking movement.
I mention this because "Lost Art Press", an independent woodworking book publisher near the front of this movement, is currently on the front page.
To make matters worse, what you see with your open eyes is not one image; it is a collection of images with all kinds of distortions, blur, and other smudges that your brain "photoshops" away to produce an image that conforms to the flawed/biased criteria that your brain considers to make a good image. The same happens with your ears and your taste buds. Reality in our brains is pure poetry.
In fact this is why self-driving cars are so hard! A complete newbie driver faces all of these tiny issues, which they _learn_ to forget as they become practiced. There are so many subtleties in these scenarios, that it's mind boggling.
I always use the title of this wonderful essay to explain how "AI" works. Deep neural architectures are so performant (in part) because they make use of all these minuscule details which our brains shield us from seeing.
Can you provide an example? Because I do not understand. Say some small detail that humans did not see because it was so small and our brain ignored it. For me, an ANN represents a more complex, giant function that we "interpolated" with tons of data; give the same problem to some smart people and sometimes they will give you the exact solution, with a proof.
I thought about this the other day, thinking about how (some) people look down on other people and pat themselves on the back for being more knowledgable, better educated, more sophisticated, having better taste, being better bred (I'm from the UK lol) etc etc. But then there is always someone above one, looking down on one with the same arrogant prejudices. Who are these people that look down on me as an ignorant, blundering oaf, and what would I need to learn to fit in?
Not that even if I did know what I was missing I would make the effort, heh.
If you want to know how much detail reality's got, build a house by yourself.
Not quite anything to do with _reality_. More just about how seemingly simple things (stairs) actually are more complicated.
That isn't much of a revelation? Although it seems like plenty of startups looking to disrupt something run headlong into the issue.
Anyway, I think a more proper title is something like "Everything is More Complicated Than it Looks"
On stairs: I remember watching something about this one on Ancient Aliens or something like that, lol
https://www.ancient-origins.net/unexplained-phenomena/myster...
No single person understands everything about how a pencil is made.
In case somebody doesn't know the reference:
https://mises.org/library/i-pencil
I recently had to build some stringers for a deck. We had an original CAD drawing with 11 steps (for a total of 12 including the top). When we produced the layout, somehow we ended up laying out 10 steps instead of 11, because I wanted a specific tread depth (~10 inches). I figured, why would one fewer step matter for anything structural? Well, it turns out that the 10% extra cut into my 2x12 left about an inch less of beam in the wood, which was enough for a 12' span to bounce as you walk up and down!
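A back-of-the-envelope sketch of the mechanism (Python, with made-up dimensions, so the exact numbers are illustrative only): the beam left in a stringer, the "throat", is the board width minus the perpendicular depth of each rise-by-run notch, so fewer, taller steps cut deeper.
```
import math

def throat(board_width, rise, run):
    # Beam left after notching: board width minus the perpendicular
    # depth of the rise-by-run triangle cut out for each step.
    notch = rise * run / math.hypot(rise, run)
    return board_width - notch

board = 11.25        # actual width of a 2x12, inches
total_rise = 85.25   # hypothetical total rise, inches
run = 10.0           # desired tread depth, inches

print(throat(board, total_rise / 11, run))  # 11 risers: ~5.1" of beam left
print(throat(board, total_rise / 10, run))  # 10 risers: ~4.8" of beam left
```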
I have learned that a universal unavoidable element to any model is that it oversimplifies the thing it is trying to explain. Sometimes that is OK and using the model we gain useful insight into a system. Other times the simplification of the model ruins its explanatory power. Academics can sometimes forget this I think in their eagerness to understand and explain things. Some models that have varying degrees of usefulness are the following: Newton's F=ma, supply and demand economics, Marxism vs capitalism, colonialism, post modernism, globalism vs nationalism, conservatism vs liberalism. We have to be very careful applying an overly simplistic model to complex phenomena. Sometimes our desire to understand something at a high level erodes our ability to understand it at all, especially when modeling human behavior where interactions fundamentally occur at the level of the individual.
Colonialism in my mind is a good example of this. We have a model (a simplification of reality) that says this is a specific time period in human history in which people groups migrated between areas trampling on indigenous cultures in the process. The reality is that this has happened since the dawn of civilization and will continue, that some colonists at some times respected the native peoples and the movement was mutually beneficial. I believe colonialism as a model has limited usefulness and it's much more useful to look at specific movements of peoples and what happened as a result.
This is why robotics is hard! Every problem seems simple, but there's a bunch of detail everywhere, and even simple problems are super difficult.
So how do you trace the stair cuts?
exercise for the reader
Aha that was different than I expected
Is it weird that I don't like his stairs? Also, I don't know many people that build wooden stairs with brackets. Usually, you get two pieces of suitably long and wide lumber, clamp them together, and cut them together. So it looks something like this:
https://www.lowes.com/pd/Severe-Weather-4-Step-Pressure-Trea...
This solves most of his problems.
Basically, I get his point, but I think he chose a really poor example to illustrate it. All I'm hearing from this guy is: "I've never actually had to build stairs."
Cutting stringers instead of using brackets would implicate pretty much exactly the same set of details and problems.