
Doctor Web discovered vulnerabilities in children’s smart watches

Author: nazgulsenpai

Score: 61

Comments: 18

Date: 2021-12-01 12:59:22


________________________________________________________________________________

fsflover wrote at 2021-12-01 15:06:49:

More people should consider open-source smart watches like the PineTime:

https://www.pine64.org/pinetime/


ipv6ipv4 wrote at 2021-12-01 16:05:24:

It doesn’t have cellular capability or even WiFi support. It’s not remotely comparable to the watches in the article.

paulcarroty wrote at 2021-12-01 18:55:57:

It has Bluetooth 5, so I guess you can share the internet through that.

zo1 wrote at 2021-12-01 18:07:25:

Anyone have any suggestions for ones that do?

tata71 wrote at 2021-12-01 15:28:03:

Can anything like this run AsteroidOS yet?

https://asteroidos.org/install/

fsflover wrote at 2021-12-01 15:45:13:

No, but it runs many other things:

https://wiki.pine64.org/wiki/PineTime_Development


hvgk wrote at 2021-12-01 16:02:40:

Moral of the story: don’t buy any old smart garbage off the internet. It probably doesn’t have your best interests at heart.

antiterra wrote at 2021-12-02 01:02:49:

The article also suggests that, even among products that are in no way nefarious, you should consider the geopolitics of the device’s server hosting location.

Aromasin wrote at 2021-12-01 16:08:00:

EDIT: Someone deleted the comment I was replying to, but I've put enough time into writing it that I'd like to post it as a separate one anyway. The original person, in short, was discussing how disgusting it is for companies to intentionally build ad-tech into their products like this, and this was my reply.

--------------------

"we should make smart watches for kids, and then sell the data to ad-tech companies, or use it to profile children so that we can make predictions about peoples adolescent and adult behavior based on categories of childhood behaviors that emerge from the data, and then leverage that data to charge them more for other products later in life."

I don't think you're wrong, but I do think (as someone who engineers IoT products for a living) that it's often the way large corporations put employees into silos that enables things like this, not necessarily that most people are morally bankrupt. It's more to do with ignorance and negligence.

I might design a device and extract data from it to give back to the user, to help them make informed decisions about their health (heart rate, sleep health, step tracking, device-time usage, etc.). All useful information to have that can arguably help make their life better. At no point do I want this data to be used for anything insidious - it's against my own principles, and, according to the constant bombardment I get during corporate culture trainings, against theirs too.

Using the same tools that I created, a data scientist in the company takes this raw information and performs some analytics. They track how long the batteries last, what the network coverage is like, what locations these devices are used in, etc. They use this data to produce actionable insights, in order to help make future products better and give the engineers some points to draw from, as well as to know what people are using the devices for. Again, they're not doing anything worrying with this information; they're simply trying to improve visibility of the data with some nice graphs and such, ready to make the next product more efficient, reliable, or affordable.
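
To make that concrete, here's a minimal sketch of the kind of analytics step I mean (the records, field names, and values are all hypothetical, not from any real product):

```python
# Hypothetical fleet telemetry records (made-up device ids and values).
from statistics import mean

telemetry = [
    {"device": "w-001", "battery_pct": 71, "rssi_dbm": -67},
    {"device": "w-001", "battery_pct": 64, "rssi_dbm": -71},
    {"device": "w-002", "battery_pct": 88, "rssi_dbm": -80},
]

# Group the raw records by device.
per_device = {}
for rec in telemetry:
    per_device.setdefault(rec["device"], []).append(rec)

# The kind of "actionable insight" I mean: per-device averages an
# engineer could use to tune battery life or antenna design.
for device, recs in per_device.items():
    print(device,
          "avg battery %:", mean(r["battery_pct"] for r in recs),
          "avg RSSI dBm:", mean(r["rssi_dbm"] for r in recs))
```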

Here you reach the business folks. You're now a few degrees of separation from the people who made the product. The business people are looking for any way they can monetize the assets that they have. They find that they're sitting on a mountain of data which, while intended to improve the lives of the people it was taken from or to improve the next generation of product, has the potential for much more value beyond that. Some external customers are offering to pay a lot of money for this data, with assurances that it'll be used to help other companies design other great and valuable products. This is where I say the negligence comes in. All it takes is someone not doing their due diligence, and this data makes its way to some scummy ad-tech company that is directly in the business of manipulation-for-profit.

I'm not trying to divert responsibility from any party here. I constantly question whether I'm in the right industry and should stop making these products, as they have such potential for harm; but equally I think the technology can be, and has been, a force for good (be it for energy saving, medical usage, industry optimization, or early disaster warning), and maybe following Luddite philosophy would equally be the wrong thing to do. It's a debate I have internally almost every day. I think we're moving in the right direction, using edge insights to perform all the privacy-essential processing on the device instead of in the cloud, and implementing new standards on how we store and use people's data, but all it takes is a situation like the above for it to end up in the wrong hands.
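
As a rough sketch of what I mean by edge insights (again, everything here is made up for illustration, not any company's actual pipeline): the raw samples stay on the watch, and only a coarse summary ever leaves it.

```python
from statistics import mean

# Raw per-minute heart-rate samples, collected and kept on the watch
# itself (made-up values for illustration).
raw_heart_rate = [62, 64, 71, 90, 85, 60, 58, 63]

def summarize_on_device(samples):
    """Reduce the raw sample stream to a coarse daily summary on the
    device, so only the summary (never the raw stream) is uploaded."""
    return {
        "resting_hr": min(samples),
        "avg_hr": round(mean(samples)),
        "peak_hr": max(samples),
    }

# Only this small dict would ever be sent to the cloud.
upload_payload = summarize_on_device(raw_heart_rate)
print(upload_payload)  # {'resting_hr': 58, 'avg_hr': 69, 'peak_hr': 90}
```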

dhimes wrote at 2021-12-01 17:38:21:

I'm not sure it's necessarily negligence. I can also see the scenario where they feel forced to monetize the data if their competitors do. All it takes is one morally bankrupt decision (or negligent, if we're being charitable) to start a race to the bottom.

Any push-back will be met with "well, it's not illegal, our competitors are doing it, and we're obligated to do what's best for our shareholders."

That's why I'm in favor of regulations/laws to stop this crap. Good limits on behavior help honest business people.

Aromasin wrote at 2021-12-02 19:41:09:

I'd argue that "feeling forced" because of competitors' behavior is no excuse at all.

I work in a field where facial recognition is rife. Despite this, our company has a zero-tolerance policy for such technology. We feel that holding such beliefs makes not only common sense but business sense. The best and most intelligent people, I believe, are more often than not also good people, and good people don't want to work for an evil company. If you want the best and brightest, then have a policy that attracts them, and you'll succeed regardless.

To add, regulations/laws are being made by governments such that this data must be available to them to "protect the people". We can't rely on them to solve this. You only have to take a look at China and Australia to see where this is going. If the government isn't going to fix the problem in the race to the bottom, then we as the authors of such technologies should try our best in their stead.

I'm not going to jump off a bridge because others do, and it shouldn't be acceptable for businesses to monetize people's personal data just because competitors do. I hope that, with the advent of ethical consumerism, companies that take the moral high ground on issues like this succeed in spite of the monetary downsides, and that educated consumers will pay the premium/make the switch to platforms that do so. It's already evident in the mass migration away from Facebook as a platform following the Cambridge Analytica scandal, and the now-regular use of VPNs by many people. I hope such momentum keeps up.

Onetanit wrote at 2021-12-01 14:29:54:

It's a shame to hear that. I never would have thought there would be such a problem with the watch. I recently won a good amount of money at the casino

https://freshcasinobonus.com/casino-bonuses/free-spins/

and just ordered a watch like this for my little daughter. Now I'm thinking of returning it, but I have no idea what to choose as a replacement :(

nanch wrote at 2021-12-01 14:31:32:

Spam

vorpalhex wrote at 2021-12-01 17:09:33:

If you click into the comment (click the time just past the name) you can "flag" it, which is the appropriate thing for spam. (Please don't flag comments you merely disagree with; keep it for actual spam, threats, etc.)

Under your profile you can also enable "See dead" which will let you see flagged comments.

LukeShu wrote at 2021-12-01 17:58:45:

You can't flag a post if you don't have enough karma. The threshold gets tuned periodically, but I seem to remember that you get a lot of features at 500 karma; nanch currently has 409 karma, so I wouldn't be surprised if he can't flag it.

jerknextdoor wrote at 2021-12-01 19:15:16:

I only have 32 karma and I have the ability to flag.

nanch wrote at 2021-12-02 02:43:31:

Thank you, I am able to flag and will do so in the future. I was not aware. Thank you.

tata71 wrote at 2021-12-01 15:01:55:

Hey, whatever happened to tarbackup?

nanch wrote at 2021-12-02 02:41:17:

Thank you for asking. The valuable feedback from the community was deeply fulfilling.

I didn't have the time to maintain the project, so I needed to shut it down.

I want to bring it back.