💾 Archived View for dioskouroi.xyz › thread › 29403240 captured on 2021-12-04 at 18:04:22. Gemini links have been rewritten to link to archived content
More people should consider open-source smart watches like the PineTime:
https://www.pine64.org/pinetime/
.
It doesn’t have cellular capability or even WiFi support. It’s not remotely comparable to the watches in the article.
It does have Bluetooth 5, though; I guess you could share the internet through that.
Anyone have any suggestions that do?
Can anything like this run AsteroidOS yet?
https://asteroidos.org/install/
No, but it runs many other things:
https://wiki.pine64.org/wiki/PineTime_Development
.
Moral of the story: don’t buy any old smart garbage off the internet. It probably doesn’t have your best interests at heart.
The article also suggests that, even among products that are in no way nefarious, you should consider the geopolitics of the device’s server hosting location.
EDIT: Someone deleted the comment I was replying to, but I've put enough time into writing it that I'd like to post it as a separate one anyway. The original person, in short, was discussing how disgusting it is for companies to intentionally build ad-tech into their products like this and this was my reply.
--------------------
"we should make smart watches for kids, and then sell the data to ad-tech companies, or use it to profile children so that we can make predictions about people's adolescent and adult behavior based on categories of childhood behaviors that emerge from the data, and then leverage that data to charge them more for other products later in life."
I don't think you're wrong, but I do think (as someone who engineers IoT products for a living) that it's often the nature of large corporations putting employees into silos that enables things like this, not necessarily that most people are morally bankrupt. It has more to do with ignorance and negligence.
I might design a device, and extract data from it to give back to the user to help them make informed decisions about their health (heart rate, sleep health, step tracking, device time usage, etc.). All of this is useful information that can arguably help make their lives better. At no point do I want this data to be used for anything insidious - it's against my own principles, and according to the constant bombardment I get during corporate culture trainings, against theirs too.
Using the same tools that I created, a data scientist in the company takes this raw information and performs some analytics. They track how long the batteries last, what the network coverage is like, what locations these devices are used in etc. They use this data to produce actionable insights, in order to help make future products better and give the engineers some points to draw from, as well as to know what people are using it for. Again, they're not doing anything worrying with this information; they're simply trying to help improve visibility of the data with some nice graphs and such, ready to make the next product more efficient, reliable or affordable.
Here you reach the business folks. You're now a few degrees of separation from the people who made the product. The business people are looking for any way they can monetize the assets that they have. They find that they're sitting on a mountain of data which, while intended to improve the lives of the people it was taken from or to improve the next generation of product, has the potential for much more value beyond that. Some external customers are offering to pay a lot of money for this data, with assurances that it'll be used to help other companies design other great and valuable products. This is where I say the negligence comes in. All it takes is someone not doing their due diligence, and this data makes its way to some scummy ad-tech company that is directly in the business of manipulation-for-profit.
I'm not trying to divert responsibility from any party here. I constantly question whether I'm in the right industry and should stop making these products, as they have such potential for harm, but equally I think the technology can be and has been a force for good (be it for energy saving, medical usage, industry optimization, or early disaster warning), and maybe following Luddite philosophy would equally be the wrong thing to do. It's a debate I have internally almost every day. I think we're moving in the right direction, using edge insights to perform all the privacy-essential processing on the device instead of in the cloud, and implementing new standards on how we store and use people's data, but all it takes is a situation like the above for it to end up in the wrong hands.
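The "edge insights" approach can be sketched roughly like this: raw sensor samples never leave the device; only a coarse aggregate is uploaded. A minimal illustration (all names here, like DailySummary and summarize, are hypothetical, not from any real product):

```python
# Sketch of on-device aggregation: the cloud receives only summary
# statistics, never the per-sample sensor stream.
from dataclasses import dataclass
from statistics import mean

@dataclass
class DailySummary:
    avg_heart_rate: float
    step_count: int

def summarize(heart_rate_samples: list[float], steps: list[int]) -> DailySummary:
    """Reduce raw sensor streams to the aggregate the backend actually needs."""
    return DailySummary(
        avg_heart_rate=round(mean(heart_rate_samples), 1),
        step_count=sum(steps),
    )

# Only this summary object would be serialized and uploaded:
summary = summarize([62.0, 64.0, 90.0], [120, 300, 80])
print(summary)  # DailySummary(avg_heart_rate=72.0, step_count=500)
```

The point of the design is that the most sensitive artifact (the raw time series) is discarded at the edge, so it can never later be repurposed or sold downstream.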
I'm not sure it's necessarily negligence. I can also see the scenario where they feel forced to monetize the data if their competitors do. All it takes is one morally bankrupt decision (or negligent, if we're being charitable) to start a race to the bottom.
Any push-back will be met with "well, it's not illegal, our competitors are doing it, and we're obligated to do what's best for our shareholders."
That's why I'm in favor of regulations/laws to stop this crap. Good limits on behavior help honest business people.
I'd argue that "feeling forced" because of competitors behavior is no excuse at all.
I work in a field where facial recognition is rife. Despite this, our company has a no-tolerance policy for such technology. We feel that holding such a policy makes not only common sense but business sense. The best and most intelligent people, I believe, are more often than not also good people, and good people don't want to work for an evil company. If you want the best and brightest, then have policies that attract them, and you'll succeed regardless.
To add, regulations/laws are being made by governments such that this data must be available to them to "protect the people". We can't rely on them to solve this. You only have to take a look at China and Australia to see where this is going. If the government isn't going to fix the problem in this race to the bottom, then we as the authors of such technologies should try our best in their stead.
I'm not going to jump off a bridge because others do, and it shouldn't be acceptable for businesses to monetize people's personal data just because competitors do. I hope that with the advent of ethical consumerism, companies that take the moral high ground on issues like this succeed in spite of the monetary downsides, and educated consumers will pay the premium/make the switch to platforms that do so. It's already evident with the mass migration away from Facebook as a platform following the Cambridge Analytica scandal, and the now regular use of VPNs by many people. I hope such momentum keeps up.
It's a shame to hear that. I never would have thought there would be such a problem with the watch. I recently won a good amount of money at the casino
https://freshcasinobonus.com/casino-bonuses/free-spins/
and just ordered a watch like this for my little daughter. Now I'm thinking of returning it, but I don't know what to choose as a replacement(
Spam
If you click into the comment (click the time just past the name) you can "flag" it, which is the appropriate thing for spam. (Please don't flag comments you merely disagree with, keep it for actual spam or threats, etc).
Under your profile you can also enable "See dead" which will let you see flagged comments.
You can't flag a post if you don't have enough karma. The threshold gets tuned periodically, but I seem to remember that you get a lot of features at 500 karma; nanch currently has 409 karma, so I wouldn't be surprised if he can't flag it.
I only have 32 karma and I have the ability to flag.
Thank you, I am able to flag and will do so in the future. I was not aware. Thank you.
Hey, whatever happened to tarbackup?
Thank you for asking. The valuable feedback from the community was deeply fulfilling.
I didn't have the time to maintain the project, so I needed to shut it down.
I want to bring it back.