Re: "The ethics in technology"
Yes! To that end, CS/Programming/IT/etc degrees at unis need need need to have more Humanities and History courses as graduation requirements.
I actually did have an "Ethics in Computing" course, which was honestly the most interesting course I took.
2023-07-24 · 3 months ago
Another interesting link, thanks.
But--really? You are personally singling out and insulting a senior Chrome developer as someone who has done harm to the web?
Between someone who is an expert in the field and some random commenter on Gemini, I know who I think has done more to benefit the world. Maybe you forgot how the web was before Chrome and Safari. It may not be perfect now, but it's a good sight better than it was.
His post is explaining how to effectively advocate for what you want the web to be. That seems like useful information.
Look, Gemini is not anti-web--it says so in the FAQ--and I hope it's not a place for a free-for-all pile-on against big tech.
It is, I hope, somewhere where people who appreciate the actual benefits of technology and the very real downsides can discuss them--without the kind of "us or them" crap or personal attacks that have broken modern political discourse.
There are a range of opinions represented here on Bubble and you'll find plenty of agreement.
Just: would it be possible to tone down the personal side, please?
Thanks.
2023-07-25 · 3 months ago
I agree with OP. Some ideas are inherently threatening. Merely suggesting them is beyond the pale. To have come up with such a proposal without thinking through the basics of how it could affect people is a mistake on the part of the team who suggested it. The team should have had an extended discussion and FAQ section to head off the controversy. It's true that this is a weakness of technical folks, but shifting that labour onto the readers is not the right solution.
I don't know what you're asking for beyond what's already in the proposal. I've copied some of the obviously relevant parts below. It's not like they're hiding the problems and they are certainly not hiding from discussing them.
Honestly, I really struggle to understand what's going on here. It used to be that critical software was developed in secret and blatantly violated privacy. Now that it's in the open with honest and open discussion, plus legal enforcement around privacy, that's not enough?
What more do you want? To prohibit ideas from being discussed at all because they mention forbidden keywords?
What we have here is tech utopia: responsible stewardship, open standards, open discussions, real competition, no lock-in.
The whole tech world gets it, everyone is on board, privacy is built in--except that you can hop onto a social network and splash your identity about the web and ruin your life in a hundred different ways, if you <choose> to. And people do. But that's not down to the tech.
How can we ensure attesters are not returning high entropy responses?
The browser must enforce top-level partitioning for all attester tokens, so that no two domains can conspire to track the user across sites using this mechanism. We also intend to limit linkability between relying parties (websites requesting attestation) and the attester in a way that prevents scaled tracking even if the attester were to collude with websites, while allowing for debugging of broken attestations.
Providing a signal that is unique to the attester could be hazardous if websites decide to only support attesters where certain signals are available. If websites know exactly what browser is running, some may deny service to web browsers that they disfavor for any reason. Both of these go against the principles we aspire to for the open web.
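To make the partitioning idea a little more concrete, here is a rough sketch of my own--not the proposal's actual API, and all the names are made up--of what a browser-side token cache keyed by top-level site might look like, so that a token handed to one site can never be matched against a token handed to another:

```typescript
// Purely illustrative sketch, NOT the Web Environment Integrity API.
// The idea: cache attestation tokens per (top-level site, attester) pair,
// so a token issued while visiting one site is never reused on another
// and cannot serve as a cross-site identifier.

type TopLevelSite = string;   // e.g. "https://news.example"
type AttesterOrigin = string; // e.g. "https://attester.example"

class PartitionedTokenCache {
  private tokens = new Map<string, string>();

  // fetchToken is a hypothetical call that returns a fresh, low-entropy
  // token from the attester; its shape is assumed for illustration only.
  constructor(private fetchToken: (attester: AttesterOrigin) => Promise<string>) {}

  async getToken(site: TopLevelSite, attester: AttesterOrigin): Promise<string> {
    const key = `${site}|${attester}`; // partition key: tokens never cross sites
    let token = this.tokens.get(key);
    if (token === undefined) {
      token = await this.fetchToken(attester);
      this.tokens.set(key, token);
    }
    return token;
  }
}
```

Even with that kind of partitioning, the high-entropy question quoted above still matters, because the content of the token itself could leak identifying bits--which is exactly why the proposal discusses it.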
Lots of respect, Morgan. To try and respond somewhat tersely to your detailed comment: the issue is not simply the proposal, but the idea that the proposal doesn't come with the requisite understanding of the reaction it will create. Rather, it uses the entrenched power this developer has to set the rules of engagement. It reads rather like "let them eat cake". Just because we are not in the IE6 days doesn't mean this gets a free ride. To make it clear: the proposal is fine*, but when the uproar doesn't prompt self-reflection, I think that's not fine.
Thanks @gyaradong :)
I guess I read his post the other way: he's not making the rules--none of this is new--he's letting you know how it works. He says "before and after I was employed by a browser vendor" so it sounds like he's familiar with this from all sides.
Actually "after" puzzles me a bit, I had assumed from OP that he's one of the ones working on the attestation proposal but now that I check I don't actually see a link, his name is not on the proposal. Sorry for the confusion.
Honestly this whole story looks like clickbait from the tech journals; e.g. Ars Technica's "Google’s nightmare Web Integrity API wants a DRM gatekeeper for the web" is just nonsense.
Should the engineers have to anticipate and deal with mainstream media firing sensationalist nonsense at them? ... ... ... well, the whole rest of the world gets the same treatment these days, so I guess the answer is "yes" :/ ... but it doesn't seem like a good use of anyone's time.
I am very happy to say I don't read the tech news any more :)
I'm glad this story filtered through to Gemspace though, it's an interesting one. As long as you skip the headlines.
Thanks. Sorry if I was a bit harsh with my response; when I wrote "you" please read that as "Reddit commenters piling on without reading the article" and not you personally :)
Personally I am sick of faux activism used shamelessly for career advancement and social media promotion. The hubris of a pop singer 'making the world better' by glomming onto a hot political issue, or artists I personally know who literally made careers of piling up garbage... We are techies, some of us smart, but entirely replaceable.
The world would be a better place if everyone stopped grandstanding. If something makes you uncomfortable, don't do it, quietly, and let others decide if they can afford not to do it, or if they care. Someone will participate in the next Manhattan Project once the door is open. But we are talking about a browser, and it's way too late for any privacy.
Well, it seems I caused quite a stir. Sorry!
But I think it's something that has nagged me lately, as a developer myself. I, too, have been tasked with things where I've just gone after the «let's solve this problem» mindset without considering the effect it would have, or ignored it because this is what the Boss wants, and they pay, and we obey. Not everyone can afford to face the boss and tell them «Not going to do this», of course.
What can we do/afford to do? Is it correct to bite back like this guy does to defend a line of work?
I would stop short of, say, making missile guidance systems, as that would make me feel bad. Even if it meant being unemployed...
As for privacy-invading stuff... I've been screaming at the top of my lungs for decades, but my otherwise intelligent friends are like "But google apps and credit cards are so convenient...". We lost that battle long ago, and no one should feel too bad building tracking software preying on the sheeple who voluntarily click through agreements giving up their privacy in exchange for a small convenience.
What google did here is a tactic that I've personally seen at companies that I have worked for. It's interesting to see it play out in public:
Leadership decides that they want to start killing puppies because it makes some metric go up. They task you, the engineer, with coming up with an ethical and humane way to kill them. They're looking for a win-win scenario.
If you push back and tell them not to kill the puppies in the first place, they will dismissively tell you that those concerns were already weighed and the decision was made to move forward with it. Basically, if they wanted your opinion, they would have asked for it. After all, your job is to solve problems, not decide the company's strategic vision.
Of course, you can still say "no" and put your foot down. You probably won't be fired. But be prepared to burn through basically all of your social capital doing so. You will be labelled as someone who "gives unconstructive criticism" and "can't make forward progress". Even your peers will turn against you, because... surprise, not everyone shares your exact ethical framework, and you're making things difficult for everyone else.
I have met some very intelligent people who are capable of the political maneuvering to pull off saying "no" to leadership without burning bridges. But they always end up being managers and not ICs, because people like that thrive in that environment and often self-select into it.
some slightly disjointed thoughts:
2023-07-26 · 3 months ago
@mozz I don't see any particular reason to think that that's what's happening here. Perhaps I missed some detail.
The blog post linked in OP was talking specifically about this type of web proposal and covered this question as "Occam's Razor": usually the simplest possible explanation is true; it's just engineers doing their day jobs, no complex backstory.
I covered the same topic in my post "On Profit", from my own experiences, not related to Chrome. Mainstream media and commentators are very quick to assume something dramatic is happening--scary motives, devious long-term plans. Usually, in my experience, it's just someone doing their day job.
Again, in my limited experience, the thing I've seen a lot from management that engineers hate is cancellation, not coercion.
OP was about engineers in difficult ethical positions--which is obviously a real problem.
But there is also an entirely different problem, which is when engineers honestly do their jobs with the aim of making the world a better place--or at least not worse--and have to put up with being told over and over that they / their ideas / their company are evil. It leads to a kind of numbness to input; the outside world just throws so much crap that it's hard to stay positive.
I've never seen the first problem firsthand, but I've seen the second lots of times. Which is why I wrote about it. Almost all the engineers I've met are not just ethical, they're unusually so--highly principled people. Of course there are exceptions. But it would almost always be a more productive interaction overall to start from assuming the engineers are trying to do the right thing for the right reasons, then <check carefully> as appropriate, based on actual fact.
One last note.
It's entirely possible for honest people with good intentions to do things that cause a bad outcome. The web has not exactly turned out to be what we dreamed about 20 years ago, and there is of course a complex interplay of contributors to that in which the tech giants have played a big part. An ethical look at what we have now combined with a time machine would cause some big changes, and I get that people are wary of tech developments as a result.
But, we still don't have a way to predict the future. There is no way to guarantee that the next change does not cause something else bad. What we can do--the best we can do--is exactly what is happening with this proposal: open discussion, frankness about the possible downsides, run an experiment and see what happens--be prepared to abort the whole thing.
This seems to me to be a pretty good state of affairs, because "stop the world" is not one of the options available. Now maybe <that> is a problem you could tie back to management--and ultimately to capitalism. There are some good posts around Gemini on possible alternatives ;)
Thanks.
@Morgan
@mozz I don't see any particular reason to think that that's what's happening here. Perhaps I missed some detail.
Under the "What I should do then" section of the blog post, the author suggests that you:
I'm not trying to be uncharitable here, really. I think the author's rules are probably all learned from experience and are *true* in the author's environment as well as my own. But I'm sad when I look at them because the trend that I see is "stay in your lane, stick to the implementation details...". Which is something that I've seen a lot in engineering orgs when ethical concerns (or business concerns, for that matter) are raised.
But there is also an entirely different problem, which is when engineers honestly do their jobs with the aim of making the world a better place--or at least not worse--and have to put up with being told over and over that they / their ideas / their company are evil. It leads to a kind of numbness to input; the outside world just throws so much crap that it's hard to stay positive.
I agree with you that this is a problem. Google is not evil (because we shouldn't anthropomorphize corporations) and the people who work there aren't evil either. Nobody is backstage twirling their mustaches and scheming up ways to screw people over. There are many things that Google produces that are ethically neutral or even downright positive.
My opinion is that all of this stuff emerges slowly, organically out of a system aggressively tuned to optimize stock value for shareholders. Google just happens to be crazy good at generating value, and its incentives can often end up misaligned with actual human beings. Which sucks for the people who work there when it happens.
@mozz well put--thanks :)
@mozz, with all due respect, corporations are specifically created to be anthropomorphized! And they can be, and usually are, evil -- golems designed to extract every penny of profit -- everything else be damned. And while corporations don't sit around coming up with evil ideas, the executives are empowered to sit around and come up with evil ideas! Screwing people is not a goal, but it is not a deterrent for implementing profitable strategies, and if that is not evil, we may have different definitions.
2023-07-27 · 3 months ago
The ethics in technology — We developers should stop looking only at the technical side of our work. There are social, economic and value considerations to take into account when we put our minds to solving a problem. We tend to go into it blindly, without thinking about what it can cause when it is released into the world. It's like putting a bunch of developers into a secret project to develop an Internet World Wide Nuclear Bomb à la Project Manhattan… the leaders shouldn't really have to hide what they...