
Researchers shrink camera to the size of a salt grain

Author: gmays

Score: 157

Comments: 98

Date: 2021-12-01 02:05:11


________________________________________________________________________________

tdrdt wrote at 2021-12-01 07:55:06:

If you take a look at the paper, it covers only the optics. That is still a big achievement, but I think the complete system is a little bit bigger than a grain of salt. So the article should be taken with a grain of salt as well.

https://light.princeton.edu/publication/neural-nano-optics/

ksec wrote at 2021-12-01 12:21:11:

>Neural nano-optics outperform existing state-of-the-art metasurface lens designs in broadband and it is the first meta-optic imager that achieves high-quality, wide field-of-view color imaging.

Am I correct in saying this is still a metalens, but enhanced with neural/ML processing?

mensetmanusman wrote at 2021-12-01 13:25:27:

You could place an array of these rice-grain-sized cameras at various locations around a room and reconstruct an image through post-processing.

hutrdvnj wrote at 2021-12-01 08:27:08:

> I think the complete system is a little bit bigger than a grain of salt.

How much bigger? Does anyone know, or can anyone estimate?

shrimpx wrote at 2021-12-01 02:39:43:

That guy Felix Heide is a research powerhouse. He and his group published 4 papers at SIGGRAPH 2021, 7 at CVPR 2021, and 6 at CVPR 2020. Wow.

https://www.cs.princeton.edu/~fheide/

randomsearch wrote at 2021-12-01 07:50:57:

I know nothing of his work, and he may well be an excellent researcher, but the use of “number of papers published” as a proxy for the quality of someone’s research is one of the biggest problems in academia. What matters is the precise nature of a researcher’s work, which is a qualitative measure that cannot be summarised with a simplistic metric. It’s so odd that we even discuss such things; it’s an overused analogy but “that Einstein publishes 5 PLA papers a year” vs “that dude developed the theory of general relativity” is so revealing.

Don’t mean to pick on you personally, but as the top voted comment it feels like we’re rewarding this harmful perspective on academia. I’d like to say “academia needs to mature” but the problem is actually that it’s gone backwards, and undoing that invasion of management consultant culture and subsequent game playing into a somewhat altruistic and intellectual vocation seems like a very hard problem.

boibombeiro wrote at 2021-12-01 08:44:42:

The parent post used the number of papers at the top conferences in their respective fields.

It means they are unquestionably pushing the state of the art forward. Or the system is completely broken.

I see your point though. This is a recurrent discussion.

I think the change of paradigm was positive. Nowadays, papers are published in an iterative and incremental way rather than, to draw a parallel with software engineering, with the waterfall methodology.

Publishing the research incrementally allows more people to get involved, create more branches, and spot errors earlier.

In software development, the change of process led to huge advancements. I believe this is also true for academia.

randomsearch wrote at 2021-12-01 09:51:21:

> It means they are unquestionably pushing the state of the art forward. Or the system is completely broken.

I agree with this statement: the system is completely broken.

I've peered inside top level conferences and I can tell you that publishing there does not mean you are pushing the state of the art. Instead you may be: popular, good at playing the political game, lucky (when acceptance rates are low, luck plays a huge part - and why they are low is a story in itself), good at writing marketing copy rather than doing science, engaging in corruption to get your paper accepted, or playing to the gallery in a way that might even harm science.

More widely, if your field is 99% politicians and 1% scientists, then the _scientific_ barrier to entry at an elite conference is not going to be a good endorsement of your work.

boibombeiro wrote at 2021-12-01 11:26:08:

I don't think you are completely wrong.

For instance, last year there was a whole debacle at one of those top conferences because someone from Nvidia who was on the board of the conference made (and posted on Twitter!) a blacklist of people who had said things not aligned with her ideology. (I wonder what happened after the whole thing went quiet.)

Yet I do still think those conferences, and their sponsors, have an interest in selecting papers based only on their content. I don't think you can get them to publish your papers so many times through influence alone. Also, things like blind peer review exist to mitigate that (although it favors writing quality over content).

Also, politics isn't intrinsically evil. One of Einstein's great successes was spreading his ideas. Arguably, others made more important achievements yet didn't amass the same level of fame.

In my experience, people who are good at politics usually pursue other paths. Doing it only to get publications is not the most rewarding thing for those kinds of people.

krageon wrote at 2021-12-01 15:26:59:

What exactly is the sort of stuff people were blacklisted for? It could be claiming that racial segregation is a great thing (which should be shunned and kept away), or it could be supporting multivitamin consumption (for which blacklisting seems like an overreach).

boibombeiro wrote at 2021-12-02 20:46:50:

https://news.ycombinator.com/item?id=25597008

The twitter thread

https://web.archive.org/web/20201215001433/https://twitter.c...

krageon wrote at 2021-12-03 10:09:35:

I cannot find anything on what was done except shunning neo-Nazis. Is that the extent of it?

derbOac wrote at 2021-12-01 09:52:08:

The problem isn't the incremental nature of the papers or the collaboration, it's the equating of scientist quality with numbers of publications (or something like it).

By your own analogy, it would be like saying "someone is a great computer scientist" based on the number of commits they _or their colleagues_ make.

I might even argue that regardless of how the product is or is not improved by the process, the way we attribute credit is worse.

If you move to a collaborative incremental process but still talk in terms of specific individuals as the source, rather than a group, there's a problem.

op00to wrote at 2021-12-01 11:13:50:

The comment wasn’t that they had a lot of papers, but they had a lot of papers in influential conferences.

shrimpx wrote at 2021-12-02 19:37:07:

I agree partially, but SIGGRAPH and CVPR are highly regarded and aren't known to accept crappy work. There are some "soft" sub-fields of CS that can be a wash, like HCI and Software Engineering. Those fields have relied more and more on "marketability" of ideas instead of depth of technical contribution. But top conferences in fields like graphics and programming languages (POPL, PLDI) are no joke, you will not be able to get a paper that isn't top-notch into those confs.

randomsearch wrote at 2021-12-03 00:06:13:

Don’t want to talk about specific conferences or individuals but I agree that some conferences (really, a handful) have far less of this problem than others.

guerrilla wrote at 2021-12-01 09:14:04:

Those weren't just papers published but papers published at elite conferences. Each one must be a serious contribution.

randomsearch wrote at 2021-12-01 09:52:48:

> Each one must be a serious contribution

I'm sorry to inform you that this is untrue. Elite conferences are full of work that is _scientifically_ awful and does not contribute anything to _science_.

op00to wrote at 2021-12-01 11:16:05:

Hello, this is the wild comment police. Please provide evidence that Elite conferences are “full of work that is scientifically awful”.

"Full" meaning that most of the work is of no value?

License, registration, and proof of your statement please.

choeger wrote at 2021-12-01 10:23:23:

That's not true in general.

Case in point:

https://icfp21.sigplan.org/track/icfp-2021-papers#event-over...

The key is that "elite" in science means "small", amongst other things. Any conference that accepts so many papers that not every attendee can visit every presentation is already not "elite" anymore.

randomsearch wrote at 2021-12-02 10:34:11:

You've chosen a good conference for a counter-example, I agree.

But even within that conference, there are examples of exactly what I'm talking about. I'm not going to personally criticise individuals as it's a widespread and often systemic problem. Peter Thiel has summarised this better than I ever could.

sytelus wrote at 2021-12-01 03:59:12:

This usually implies he has great funding sources and a very large lab. Being a "powerhouse" as a researcher is probably a different thing.

oceanghost wrote at 2021-12-01 04:03:08:

I know nothing about this person, but if his case is the average, he's probably got a lot of grad students doing the actual work.

I can't tell you how many papers/patents I wrote only to be listed as the 5th contributor or some such thing.

voxl wrote at 2021-12-01 07:08:58:

https://light.princeton.edu/team/

A visiting professor, 3 postdocs, and 9 PhD students, all of very high quality (because, well, they're at Princeton).

My guess is each postdoc could produce 1 to 2 papers a year on their own. A PhD student at such a difficult school can probably produce one paper a year once they are in their third year.

14 publications in 2021. Let's guess 6 of those are from postdocs and 4 from PhD students, leaving us with 4 publications between the two professors, or two from each. Of course, this is all very rough estimation.
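Making that guess explicit as arithmetic (every count below is an assumption from the estimate above, not real attribution data):

```
# Rough head-count estimate; all numbers are guesses, not actual attribution.
papers_total = 14
from_postdocs = 6        # ~2 each for 3 postdocs
from_phd_students = 4    # not every student publishes every year
professors = 2

from_professors = papers_total - from_postdocs - from_phd_students
print(from_professors, from_professors / professors)   # 4 papers, 2.0 each
```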

shrimpx wrote at 2021-12-01 07:51:46:

The way it likely works is the prof drives the vast majority of idea/IP development and the students do the work under the prof's guidance. Then the prof "frames" the paper and does part of the writing, especially in intro and conclusion sections. These profs likely don't drive any solo projects and don't do any prof-prof collaborations without students involved.

derbOac wrote at 2021-12-01 09:41:44:

The problem is we will never know what really happened.

This model leads to a necessary accrual of credit to more senior researchers because the default assumption is to assume credit of a group to the more senior individual.

The problem is there's no way (or little way) for the less senior individual to establish credit unless they are publishing on their own, which isn't feasible.

In this way, group efforts tend to trickle toward the person in the senior position, and credit accrues to people in management of large groups irrespective of where ideas are coming from or who is doing the work. We say "someone is a great scientist" because they have an administrative position with reference to a large group.

Whoever has the most papers with the largest groups wins.

rudyfink wrote at 2021-12-01 05:25:11:

I'd love to see a paper looking at this (e.g., how often is the "primary" author the last author, or is the last author a better predictor of quality than the first, if you look back historically). I suspect this is a very widespread issue.

tokai wrote at 2021-12-01 09:06:53:

https://www.sciencedirect.com/science/article/pii/S175115771...

detaro wrote at 2021-12-01 07:59:15:

If you look at the linked list of papers, the prof usually appears at the end of the list. That's common in computing fields.

sjtindell wrote at 2021-12-01 04:00:10:

Are we seeing more people on this path where you get tenure (has to be one of the most secure jobs possible) and then launch companies from that foundation? It meshes so well, incredibly competitive though. Perhaps I’m just noticing more.

shrimpx wrote at 2021-12-01 06:38:11:

It sounds like in the case of Felix, he started his company while he was a Ph.D. student.

alexpotato wrote at 2021-12-01 13:17:52:

I remember years ago watching a show on potential future technologies and one of the people being interviewed mentioned the following:

"One day, your child will be able to go to a dollar store and buy sheet of sticker cameras. These cameras will be paper thin, can be stuck anywhere and will have their own internet address that can be accessed over the public internet from a browser. Everything I just described is technically possible now, it's just not been put together and some of the components aren't cheap enough. As a society, we should be prepared for the ramifications of this."

Research and advancements like this make that future feel like it's getting closer and closer.

curryst wrote at 2021-12-01 13:37:00:

An ESP32-CAM isn't far off. It's got a sticker-sized footprint already. If you cut off the pins, the PCB is thicker than a sticker and the camera is a centimeter or two long, but they're like $10 apiece for ones with a camera and an SD slot.

LargoLasskhyfv wrote at 2021-12-01 16:23:40:

There are so many microcontrollers which could be packed behind this, in a sandwiched way, behind the CCD which captures the light. Think of the stuff that's on your Smart-/Chipcards, like Credit/Debit/Banking/ID/Drivers License/...

The thing under the golden metal contact plate has a diameter of about 5 millimeters, yet it is able to run a subset of something like Java.

The same goes for the SIM in your phone, or a Micro-SD Card.

There the Microcontroller serves the interface on one end, and acts as a RAID-controller for the arrays of NAND-storage on the other end.

This can all be hidden behind that, layered underneath each other.

The biggest thing would be the battery to power it, or the coils to harvest energy from WiFi, or whatever.

Combine that with circuitry which only wakes up the system when the image is changing, and you're good to go, ...err, spy ;->

dtgriscom wrote at 2021-12-01 03:11:52:

The article keeps saying "camera", but we only see the lens. How big is the sensor and cabling?

alted wrote at 2021-12-01 04:40:06:

The full setup is in the supplementary information (available at the bottom of the main paper website [1]), Figure 7. After this small lens, there are a couple more large lenses before the final camera/sensor, apparently an AVT Prosilica GT1930C (which is not tiny; the full setup would be maybe ~200 mm in length).

So basically, yep, this work is just about a better tiny lens; the press article is misleading (the paper [1] is better written). I don't know enough about optics to comment on how it compares to previous work, or how small a full lens + sensor system can currently be.

[1]

https://doi.org/10.1038/s41467-021-26443-0

ISL wrote at 2021-12-01 10:35:46:

Fig. 7 is definitely where it's at -- I had the same question, as a camera is not just a lens, but a _camera obscura_[1], both the lens and the 'dark box' within which an image is formed.

My impression is that their metasurface lens has approximately a 1 mm focal length. In the diagram, the light appears to pass through an intermediate focus at that distance, which they then pick up with another optical system for control/convenience.

I don't see anything that would, in principle, prevent using the metasurface lens as a component of a tiny camera. Nature has certainly evolved imaging systems that are smaller and modern sensor technology permits wavelength-scale pixels. With maximum charity, it appears that this lens/methodology might be used to yield a ~1-2 Mpx imaging system with a 1 mm f/2 optical system.

The claimed advance here appears to be that the combination of their lens and processing yields higher quality imaging at these physical scales than prior work.

It does indeed appear that in combination with a tiny imaging sensor, this lens _could_ yield a camera the size of a large salt-grain.

This entire field of engineering appears to be in a golden decade; it may not be too long before we start seeing metasurface optics in consumer imaging products.

[1]

https://www.etymonline.com/word/camera
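A rough back-of-the-envelope check of that ~1 mm f/2 estimate (the wavelength, field of view, and pixel pitch below are my assumptions, not numbers from the paper):

```
import math

# Assumed values for illustration; the paper's exact parameters may differ.
wavelength = 550e-9     # green light, metres
focal_length = 1e-3     # ~1 mm, as estimated above
f_number = 2.0          # f/2
fov_deg = 40.0          # assumed field of view

# Diffraction-limited (Airy) spot diameter at the focal plane.
airy_diameter = 2.44 * wavelength * f_number                  # ~2.7 um

# Image-circle diameter for that focal length and field of view.
image_circle = 2 * focal_length * math.tan(math.radians(fov_deg / 2))  # ~0.73 mm

# Nyquist-sampled pixel count across the image circle.
pixel_pitch = airy_diameter / 2                               # ~1.3 um
pixels = (image_circle / pixel_pitch) ** 2

print(f"spot {airy_diameter * 1e6:.1f} um, "
      f"image circle {image_circle * 1e3:.2f} mm, "
      f"{pixels / 1e6:.2f} Mpx")
```

Depending on the assumed field of view and pixel pitch, this lands anywhere from a few hundred kilopixels up to the low megapixels, the same ballpark as the estimate above.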

cvakang wrote at 2021-12-01 03:35:35:

Yes, I had the same question. But if the lens is miniaturised, so can the rest be. Imagine swallowing this and getting pictures of the inside of your body on a smartphone.

brazzy wrote at 2021-12-01 11:22:41:

> But if the lens is miniaturised, so can the rest be.

Not necessarily. I recently looked into miniature cameras for self-built drones.

For less than $50, you can get a package with a lens that's maybe 2-3mm across. The entire sensor+lens unit is roughly 1cm cubed. But then you need the electronics to process and store the image, and you're up to credit card size. And then you need a battery, and not a tiny one because that image processing draws quite a bit of juice.

euler2100 wrote at 2021-12-01 23:22:32:

Mind sharing the link for such small camera sensors and lenses?

brazzy wrote at 2021-12-02 11:48:04:

Here's one example:

https://www.aliexpress.com/item/4001057625611.html

nine_k wrote at 2021-12-01 08:17:39:

There is a diffraction limit, which is not very small for visible light. Making a pixel grid that would resolve very tiny details may be hard.

OTOH, the light sensor need not be as small. Imagine a thin optical fiber with such a lens on one end, and a large lens and large enough camera on the other end. It could immediately make a better and less invasive endoscope, e.g. for surgery.

codingdave wrote at 2021-12-01 10:51:17:

That already exists - the term is "capsule endoscopy" if you want to find out more. But -- it is a big pill to swallow. A smaller and better camera could both make the experience better for the patient and improve results, so this is still a good step to improvements.

xwdv wrote at 2021-12-01 02:22:55:

Could two of these cameras create stereoscopic videos that can let us feel what it would be like to be so tiny?

FredPret wrote at 2021-12-01 03:30:14:

Conversely, imagine dual orbiting telescopes feeding into VR glasses giving us the perspective of a planet-scaled monster

drdeca wrote at 2021-12-01 04:56:27:

Relevant xkcd:

https://xkcd.com/941/

bee_rider wrote at 2021-12-01 05:47:05:

That was a while ago. Somebody should rig up a pair of drones to a VR headset. Although, the head tracking latency might not work out...

fraiz wrote at 2021-12-01 03:49:10:

Project website is here:

https://light.princeton.edu/neural-nano-optics/

kingcharles wrote at 2021-12-01 04:13:34:

The sample photos are impressive.

fingerlocks wrote at 2021-12-01 07:42:05:

Imagine a matrix of tightly packed, pixel-sized micro-cameras on the opposite side of a similarly sized LED screen. You could project the camera image on the screen and make the entire object appear roughly invisible.

If we had a full body suit made of tiny cameras behind translucent LEDs, could we have invisibility/cloaking suits? Is this remotely feasible with current technology? Serious question.

ei8htyfi5e wrote at 2021-12-01 07:54:48:

For invisibility to work like that for more than one viewing angle, you would need to add the initial number of pixels for each angle you want to support, so it's unlikely to work even at small sizes. For 10 angles, you'd need 10x the original number of pixels.
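As a toy illustration of that pixel budget (the display resolution and angle counts below are assumptions, not taken from any real cloaking design):

```
# Toy pixel budget for a multi-view "invisibility" surface; numbers are assumptions.
base_pixels = 1920 * 1080          # pixels needed for one viewing direction
for angles in (1, 10, 100):
    total = base_pixels * angles
    print(f"{angles:>3} viewing angles -> {total / 1e6:.0f} Mpx")
# 1 -> 2 Mpx, 10 -> 21 Mpx, 100 -> 207 Mpx
```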

fingerlocks wrote at 2021-12-01 08:13:03:

With facial detection and one observer, we could orient our projection as the viewing angle changes. It wouldn’t be true invisibility, but maybe at best a Predator-style fuzzy cloak?

nine_k wrote at 2021-12-01 08:13:16:

This works for very narrow FOVs, though.

E.g. a stealth aircraft can absorb the radar signal, analyze its direction (using a phased antenna array) and send back a "reflection from earth" with a proper delay. This renders the aircraft effectively invisible to the radar, provided that the covering of the aircraft indeed does not reflect much back.

oblio wrote at 2021-12-01 08:47:24:

And even if we somehow made the physics work for the actual displays, I can't even fathom how much computing power you'd need to push all those pixels around to make it look realistic.

Cyclical wrote at 2021-12-01 03:44:15:

Very cool! Fantastic that they were able to do this in silicon nitride, as that'll make commercialization and scalability much easier. Nitride is one of the more common MEMS manufacturing materials, and from the (admittedly tiny, which makes sense since they're trying to commercialize) fabrication section of the paper it seems like they're using a standard RIE process to manufacture the optical posts. I'm interested to know how they controlled for the isotropy of the plasma process without using DRIE, which is frequently used for these kinds of high aspect-ratio devices.

JoeAltmaier wrote at 2021-12-01 03:26:05:

So privacy will evaporate. Imagine dropping millions of these from a drone, carpeting a city. They'd be carried everywhere on clothing, animals, wind. Go to any location and scan for local cameras, download.

Privacy will have to be redefined. We've done this lots of times. You hear what goes on behind the bathroom door, you don't say anything, you actually forget it. Because it's not polite to snoop.

So in future even if you see something you shouldn't on camera, it'd be boorish to mention it.

That doesn't solve the police state of course.

xijingpooh wrote at 2021-12-01 06:28:38:

They still need a means to store or transmit recorded data, and a power source. So it's not that dystopian, but some hotels and Airbnbs will certainly be bugged with such tech. You won't see it if it's embedded into a wall or a framed picture. It's probably quite useful in the automotive sector as well.

zuminator wrote at 2021-12-01 04:18:20:

This is just the camera mechanism itself, not a recording or transmitting device nor, as sister comment points out, a power source. So, a few more steps to smart dust.

perl4ever wrote at 2021-12-01 05:26:08:

>Imagine dropping millions of these from a drone, carpeting a city

This was the end of a science fiction novel from 1972, albeit the technology was an imaginary substance called "slow glass":

https://www.goodreads.com/book/show/939190.Other_Days_Other_...

bobthechef wrote at 2021-12-01 03:53:33:

Counter snooping measures will likely be developed in parallel. Maybe localized EMP or something that floods sensors with infrared light or whatever.

LargoLasskhyfv wrote at 2021-12-01 15:57:00:

That would be one big RED ALERT halo, if they were watched by someone in realtime. Or some system. BEEP! BEEP! BEEP! _Potential Perp_ entered perimeter!

Beep! Beep! Beep!

dataflow wrote at 2021-12-01 03:34:37:

> Imagine dropping millions of these from a drone, carpeting a city.

Doesn't it still need a source of power?

roywiggins wrote at 2021-12-01 03:47:08:

There's at least some power to be pulled from ambient radio waves. No idea if it would be enough to power these things.

https://phys.org/news/2017-08-ambient-energy-power-internet....

Or combine it with photovoltaics:

https://www.smart2zero.com/news/self-powered-camera-sensor-c...
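Some very rough energy math on that idea (every figure below is an order-of-magnitude assumption, not a measurement): ambient RF harvesting is usually quoted in the nanowatt-to-microwatt range for a small antenna, while capturing and transmitting even one frame costs milliwatts for a fraction of a second, so such a device would have to bank energy and run in bursts.

```
# Order-of-magnitude duty-cycle estimate; every figure here is an assumption.
harvest_power = 1e-6          # 1 uW harvested from ambient RF (optimistic)
frame_power = 5e-3            # 5 mW to capture and transmit a frame
frame_time = 0.2              # seconds of activity per frame
frame_energy = frame_power * frame_time             # 1 mJ per frame

seconds_between_frames = frame_energy / harvest_power
print(f"~{seconds_between_frames / 60:.0f} minutes of harvesting per frame")
# -> roughly 17 minutes per frame under these assumptions
```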

JoeAltmaier wrote at 2021-12-01 04:10:38:

RFID tags have no power source, and can do significant processing. As others mention, ambient electromagnetic fields can be harvested with a loop antenna.

fallat wrote at 2021-12-01 04:55:22:

What processing can an RFID do?

AFAIK, RFID tags can only store and send data, and are pretty "dumb".

anaganisk wrote at 2021-12-01 06:01:58:

I remember watching a documentary on Netflix about spies where they show a device that could record and transmit sound using the same technology.

dredmorbius wrote at 2021-12-01 16:51:47:

The "Chrysostom" Great Seal Bug, 1945, USSR, created by LĂ©on Theremin, who also invented the early electronic musical instrument bearing his name. The bug remained installed in the US Embassador's office for 15 years, though it was recognised as a listening device in 1952.

https://hackaday.com/2015/12/08/theremins-bug/

https://www.algora.com/Algora_blog/2021/02/07/the-great-seal...

xijingpooh wrote at 2021-12-01 06:34:38:

That's how the Soviets bugged the US embassy in Moscow. With a resonant circuit w/o a power source and a reflected RF beam.

fallat wrote at 2021-12-01 13:20:46:

It's not processing though? It's acting like a mirror or a polaroid? Do those process?...

_jal wrote at 2021-12-01 05:49:20:

For instance,

https://www.usenix.org/legacy/events/nsdi11/tech/full_papers...

trompetenaccoun wrote at 2021-12-01 06:42:05:

It's theoretically possible but they'd have to put up antennas all over our cities, emitting ultra high frequency signals. The distance this would work over would be very short as well.

wiml wrote at 2021-12-01 05:50:20:

This always makes me think I should go back and reread Brin's _The Transparent Society_ and see how it's held up to time. It seems pretty prescient from what I can remember.

shahar2k wrote at 2021-12-01 05:59:31:

This is fun stuff. My uncle did quite a bit of development on micro cameras.

He worked on and made quite a bit from the patent for the first ingestible pill camera, and had some even smaller cameras for other medical devices (fun to play with when I visited a few years back).

I also bet many of these lenses can be stacked together to create a nice lightfield display...

bitwize wrote at 2021-12-01 02:36:15:

Localizers are a thing now.

cardamomo wrote at 2021-12-01 02:52:03:

I had the same thought! Now, can they create a mesh network and operate as a distributed processor?

SECProto wrote at 2021-12-01 03:18:50:

And perhaps most importantly, do they have a low enough weight-to-surface-area ratio that they float, and low enough power requirements to be powered by almost undetectable microwave pulses?

Gravityloss wrote at 2021-12-01 09:41:53:

Is the neural network fixed, i.e. the same for different subjects? And how much of the image is filled in by it? I would like to see multiple photos of different subjects to gauge how much is just detail generated by the network.

f3rnando wrote at 2021-12-01 03:43:58:

13 days ago:

https://news.ycombinator.com/item?id=29255511

("How to grow sodium chloride crystals at home")

legostormtroopr wrote at 2021-12-01 04:16:27:

How is this relevant to the conversation? Did you see salt grain and assume it was a dupe?

jhgb wrote at 2021-12-01 11:54:45:

> How is this relevant to the conversation?

Presumably in the "how large do you want the salt grain to be?" sense?

scollet wrote at 2021-12-01 05:16:19:

How do they taste?

kumarvvr wrote at 2021-12-01 05:20:24:

So, machine learning is used to learn the behavior of an individual lens via supervised training, and then that ML model is used to reconstruct all other input images?

Did I understand that right?
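That is roughly the idea the article describes ("joint design of the camera's hardware and computational processing"). A minimal sketch of the learned-reconstruction half only, assuming a fixed placeholder PSF (the names ReconNet and simulate_sensor are invented for illustration and are not the authors' code):

```
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Tiny CNN that maps a blurry sensor measurement back to an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def simulate_sensor(image, psf):
    """Differentiable forward model: blur the scene with the lens's PSF."""
    return torch.nn.functional.conv2d(image, psf, padding=psf.shape[-1] // 2, groups=3)

recon = ReconNet()
optimizer = torch.optim.Adam(recon.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder blur kernel; the real PSF would come from a model of the metasurface.
psf = torch.ones(3, 1, 9, 9) / 81.0

# Stand-in for real training images (batches of N x 3 x H x W).
dataset = [torch.rand(4, 3, 64, 64) for _ in range(8)]

for ground_truth in dataset:
    measurement = simulate_sensor(ground_truth, psf)
    restored = recon(measurement)
    loss = loss_fn(restored, ground_truth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the actual work, the optic's parameters are optimized end-to-end together with the reconstruction network, rather than the PSF being fixed as in this sketch.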

mnw21cam wrote at 2021-12-01 08:58:50:

I thought metasurfaces were limited to handling a single wavelength, but this claims to be capable of full colour images. Anyone have a handle on how that works?

mgdev wrote at 2021-12-01 16:07:59:

Can the optics be used in reverse, as a short-range projector? Very interesting AR applications if so...

14 wrote at 2021-12-01 02:38:17:

Incredible. I guess the first use I thought of was swallowing the camera to identify bowel issues. Can’t wait to see what they come up with.

kkaranth wrote at 2021-12-01 02:43:35:

That's mentioned in the article

> Enabled by a joint design of the camera’s hardware and computational processing, the system could enable minimally invasive endoscopy with medical robots to diagnose and treat diseases, and improve imaging for other robots with size and weight constraints. Arrays of thousands of such cameras could be used for full-scene sensing, turning surfaces into cameras.

There are existing swallowable capsule endoscopy devices like the PillCam[1], the entirety of which is 11mm x 26mm[2]

[1]

https://www.medtronic.com/covidien/en-us/products/capsule-en...

[2]

https://www.hopkinsmedicine.org/johns_hopkins_bayview/_docs/...

daveguy wrote at 2021-12-01 02:47:59:

If it could see through feces, that would be awesome. The worst part, by far, of a colonoscopy is the prep. And the prep is needed for a clean view. A smaller camera isn't going to help with cleaning things up. Improving the optical clarity of existing small-scope procedures (like cardioscope) would probably be the first application.

inter_netuser wrote at 2021-12-01 03:09:00:

what prep did you use?

AFAIK, GIs much prefer to use the scope, and would only use a pillcam if everything but the small intestine has been ruled out.

trompetenaccoun wrote at 2021-12-01 06:53:35:

It will be used to spy on us.

dookahku wrote at 2021-12-01 05:45:54:

In the future, everyone will be nude on the internet for 15 minutes

interfixus wrote at 2021-12-01 05:06:39:

Completely unsurprising of course, but still the second-scariest thing I've read this morning, and actually in quite a while.

The scariest is sophisticated, well-informed, and highly intelligent HN commenters in this thread who seem to be bubbling over with unbridled enthusiasm.

fitzn wrote at 2021-12-01 05:40:58:

This is wild.

williamtrask wrote at 2021-12-01 02:33:19:

…something something privacy…

inter_netuser wrote at 2021-12-01 03:10:21:

Even if banned legally, the amount of illegal surveillance will be mind-boggling.

It'll only keep shrinking, until it is the size of dust particles, just another order of magnitude.

How do you prevent dust-cameras from being around? A pre-emptive EMP pulse, lol?

f3rnando wrote at 2021-12-01 04:03:17:

You are overthinking here:

1. Actually, the main problem with privacy is that most people still don't really care, and many more underestimate its current reach. If this were somehow to happen, it would probably raise awareness, and that is of topmost priority.

2. It is already mind-boggling.

3. A spy needs privacy to spy.

Gigachad wrote at 2021-12-01 05:18:14:

It goes further than this. Most people know about Airbnb creep cams. Everyone cares about it when asked. And yet they are still a huge problem because they are so easy to hide.

f3rnando wrote at 2021-12-01 04:06:21:

Not to mention that it would salt the earth :P

hellbannedguy wrote at 2021-12-01 03:04:12:

I've given up on privacy. Cameras will be watching our every step. The only positive is it might make cops/feds more honest if they know they are being recorded at every step?

Krisjohn wrote at 2021-12-01 05:10:00:

... and use it to photograph a "close ad" button.