Archived View for dioskouroi.xyz › thread › 29396643 captured on 2021-12-04 at 18:04:22. Gemini links have been rewritten to link to archived content
________________________________________________________________________________
Some FBI agents came to my house once and told me that my home Internet had been used to visit Islamic Extremist websites. They brought a local police office with them and a 'threat assessment' coordinator from my workplace. They asked me if my family was Muslim and wanted to know if we had been radicalized.
We are not religious (at all). We do not attend church, synagogue or mosque. We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.
I have no idea why they thought this about us. Maybe it was an IP mix-up, but it was very disturbing. I feared that I may lose my job. I became very afraid of the FBI that day. I think this could happen to anyone at anytime.
"'threat assessment' coordinator from my workplace"
"I feared that I may lose my job."
I understand that police/FBI have to conduct investigations. What I don't understand is the involvement of the employer; it's extremely disturbing. You have not been convicted, you have not been charged, you are not even a suspect or accused of anything at this point, so how is your private life the business of your employer?
Why is your privacy being breached and your livelihood being placed at risk?
Surely the FBI is not allowed to publicise random dirt they find on innocent people?
The FBI still has buildings named after J Edgar Hoover. That should tell you everything you need to know about their institutional respect for justice and due process.
For non-americans, what was J Edgar Hoover known for?
I'm also not an American - but as far as I've read - massive abuse of power in using the FBI to spy on political rivals, illegal wiretapping, illegal surveillance of US congressmen and even presidents, running the FBI while they were doing extremely controversial programs like COINTELPRO and programs and investigations that tried to hinder the civil rights movement, etc.
As a non American, I think COINTELPRO is the single most anti-democratic abuse of power ever done by US government.
https://en.wikipedia.org/wiki/COINTELPRO#Range_of_targets
https://en.wikipedia.org/wiki/COINTELPRO#Alleged_methods
https://en.wikipedia.org/wiki/J._Edgar_Hoover#Investigation_...
https://en.wikipedia.org/wiki/J._Edgar_Hoover#Reaction_to_ci...
Turning the FBI into a blackmail operation.
Being in the pocket of the mob
Cross dressing
> _Surely the FBI is not allowed to publicise random dirt they find on innocent people?_
If they're doing an investigation, they very likely got the employer involved in order to get more information on the person they're investigating, and companies have liaisons for law enforcement, as well. If the FBI comes knocking and says, "we think you've hired a terrorist," it's going to ruffle some feathers at the company no matter how unfounded or untruthful the claim is.
It isn't just the suspicion of terrorism that might have law enforcement or the FBI knocking at an employer's door. If someone is suspected of any type of cyber crime, the FBI will be coming for all of their computers and electronic devices, including the ones they use at work.
"If they're doing an investigation, they very likely got the employer involved in order to get more information on the person they're investigating"
What is an employer going to contribute, realistically? "Oh yeah, he always carries potassium nitrate and makes explosions during lunch breaks!"
Depending on the company, they would likely audit its activities in case the company itself was a vector, assuming that terrorists also require intelligence networks.
This is par for the course FBI intimidation tactics, along with interviewing everyone you've regularly conversed with. Serves a double purpose of investigation while simultaneously making you radioactive to be around.
Thereby isolating the person during a period of high emotional anxiety.
Employer might have been a defense contractor. Most jobs without clearance don't even have a "threat assessment coordinator".
> Most jobs without clearance don't even have "threat assessment coordinator"
The title may vary from place to place but all companies have people filling this role, even if you've never met them.
Normally falls somewhere under a team like Global Intelligence, Workplace Security, Business Continuity, etc.
No, most places do not have Global Intelligence or Workplace Security positions. Business Continuity is most often an IT business function ...
Companies that employ software engineers likely are divided into those that have that role and those that don't have it _yet_.
You deserve to always be assumed innocent until proven guilty, and you have to be proven guilty to be found guilty; realistically speaking, those premises are extremely technical.
You don't have to be found guilty to be punished; look up "case load". That can keep you on probation and monitoring for as long as they want to draw out the case, and the whole time you are required to make monthly payments or risk going to jail.
In the US, the process IS the punishment.
This is one of the principal arguments for the "speedy trial" clause of the US 6th Amendment, and for similar rights in other jurisdictions.
Note that the US law does _not_ apply to noncriminal processes --- civil lawsuits or other elements of law.
How about a State felony case that has taken nearly two years?
How about it?
Without specifics, or some indication of who is triggering the delay (e.g., defendants may request delays), I couldn't possibly comment.
Given that law and legal processes are not my bailiwick, I'd probably not be able to comment intelligently regardless. But you've posed a null-content question.
The State Attorney General is dragging the case out because they refuse to look at it. They also filed it under the wrong statute, so their arguments are incorrect.
Seems possible grounds for a challenge. The entire case can be dismissed if the right is denied.
https://www.justia.com/criminal/procedure/right-to-a-speedy-...
https://www.nolo.com/legal-encyclopedia/the-right-speedy-tri...
Not during COVID times...
He is already being punished.
I'm reading a book where the main character receives a subpoena to go to an interview with the Portuguese dictatorship's political police. Nothing happens to him (so far), but everybody in the hotel where he is staying starts to treat him differently.
Who will be first in line when a firing is necessary? Probably the guy that has problems with the FBI.
It's (scarily) interesting that they react with actual personal attendance based purely on a very limited set of electronic information.
From your further description:
> We are not religious (at all). We do not attend church, synagogue or mosque. We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.
Would the FBI not have been able to do any amount of background searching (read: further electronic information gathering) that would be less effort-intensive than arranging a 'threat assessment' coordinator from throw_away_dgs's actual workplace and a local police officer for an in-person door-knock? If such background checks were performed, then either they don't have much data or their threat weightings are set to red-scare levels of paranoia. Either way, it's scary.
Unless there's more to the story.
I think what he experienced is another manifestation of the same phenomenon as zero-tolerance policies in schools; institutions ask their enforcers to suspend common sense and strictly enforce the letter of the law/guideline/etc., even in situations where any reasonable person would decide it made no sense. They do this because such common sense and gut feelings are how bias and prejudice might creep into their oh-so-perfect system.
It used to be that if a teacher saw a kid get bullied and then punch his bully back, the teacher was empowered to evaluate the situation using their best judgement, and punish the bully while congratulating the bullied kid who stuck up for himself. The system sees a problem with that; the teacher's perception of the incident might have bias and prejudice. The system's solution is to have zero tolerance for any violence and punish both students equally. The system's solution to the possibility of prejudice against one student is to ensure prejudice against _both_ students.
At my school it was worse than that. Anyone "involved" in a physical altercation would be suspended. Someone could walk up and punch you, and you would be suspended for it. This obviously had a chilling effect on reporting. No more bullying. Problem solved.
Such policies also justify and encourage excessive retribution. If you're getting suspended whether you fight back or not, you may as well cause some real damage to earn it.
> the teacher's perception of the incident might have bias and prejudice.
I mean that's not entirely wrong either. Bullying was still a thing before zero tolerance policies.
Not to say zero tolerance policies are the right solution, but personal bias _is_ a big problem when it comes to enforcement.
Of course. Bias and prejudice is always a real concern. In situations where the teacher gets it wrong and punishes the bullied kid, the kid learns an unfortunate but useful lesson; that some agents of the system cannot be relied on.
But the zero tolerance response to this circumstance ensures the bullied student is prejudiced against, judging him guilty before considering the facts of the individual circumstance. What does that teach the kid? That the _system itself_ cannot be relied on.
to be fair, that's a pretty valuable lesson to learn out here. it would be neat if we had a system we could rely on.
I was about to comment the same thing. I teach future teachers, and I always say that everyone forgets their school math and chemistry lessons after cramming for the test. What sticks is learning how to survive in an unequal, dysfunctional system where you're the oppressed class, fighting among each other while you can't touch the people in power.
This is how 95% of the world works. In most countries, people are conditioned to "join" the rulers from a very young age, and people who use critical thinking are a tiny minority (often invisible)
Bullying is still a thing.
But they did not establish how legislation has elevated itself above that.
They are right that everyone is biased; what they completely fail to establish is how they improved on their own perception. Actions justified because of the presence of bias and prejudice very closely mirror religious dogma, by a more objective metric.
> It's (scarily) interesting that they react with actual personal attendance based purely on a very limited set of electronic information.
Either their intel is better than they let on and they didn't think they would be walking into an ambush, or they are more stupid than we think.
Actually, I think they had no intel. You NEED intel for a judge to order a subpoena, and if a subpoena were issued, the ISP would open their firehose and overwhelm the FBI with evidence suggesting that there's nothing to investigate. And having visited extremist sites a handful of times, even inadvertently, is probably not going to meet the threshold for a subpoena.
If the FBI visited me and casually asked about my web history, I would casually ask them to pound sand (as should everyone!). But if the agent was accompanied by someone from my employer, I would eagerly cart up every single device in my home and offer to carry it out to their vehicle (as I fear most would).
It smells like someone is taking massive investigative shortcuts, at very significant cost to the accused. Then again, I can't even fathom the upside for the FBI.
My gut reaction is simply speed. Why sit at my desk for a few hours reading documents when I can make a couple of phone calls and be scary for 20 minutes to feel secure in saying "yep - not terrorists"?
Or - you know - "weeeelp, I've been sitting at this desk all morning, let's go talk to someone".
As commenter below says: Power.
Why spend the extra time and effort, let's just hit the road and totally and completely fuck at least one citizen's opinion of the entire system upon which their life and livelihood depends.
Saves me a couple of hours, and the sun's out. Sold!
Ironically, maybe this will actually radicalise the people they're investigating for radicalisation.
> Then again, I can't even fathom the upside for the FBI.
The upside is power.
You yourself said as much: "If the agent was accompanied with someone from my employer, I would eagerly cart up every single device in my home and offer to carry it out to their vehicle."
You fear them. Rightly so. The FBI has incredible power, backed by the full might of corporate media. To cross them is to be crushed.
Why would they need a warrant, when Apple and Google climb over each other to volunteer every scrap of your private information? Why take the time for a trial, when justice can more efficiently be served by both your employer _and_ your union gleefully ruining you financially upon request?
People have been demanding[1] this for years. Now it's here.
[1]
Apple have famously refused FBI requests.
https://en.wikipedia.org/wiki/FBI%E2%80%93Apple_encryption_d...
>_If such background checks were performed, then they either don't have much data or their threat weightings are set to red-scare levels of paranoia. Either way, it's scary._
They're not gonna have anything happen to them if they go tough on (and fuck over) an innocent guy.
They're gonna look bad if they miss a terrorist.
So they have no incentive to not have "red-scare levels of paranoia".
That's true, I still remember the fact that the Boston Bomber(s) were on international watch lists and their home countries warned the US (whichever TLA, may have been an issue of crossed wires) that these guys were on the move, and it was all ignored.
Now, visit a 'bad' website, or somehow be mistaken for someone that visited a 'bad' website, and you'll get some deep personal treatment.
Feds can't win, but it seems to be through their own laziness or incompetence or lack of interagency cooperation.
Or maybe it's because of their motives, and the level of capture they have over their 'customers'? Seems pretty simple to me. They have a monopoly on service, and the only retribution people can take is political, which means everything is done on appearance.
Imagine being a Muslim in such a case. Trying to convince them that this could be a mix-up (which can easily happen) won't be successful.
Imagine being a Muslim and e.g. having a kid visiting those sites.
Or just going there out of intellectual curiosity, like how a leftie might read Mein Kampf to check what that shit is.
You can end up in a very bad position...
A couple years after 9/11, my father and I had donated to the Holy Land foundation.
The IRS proceeded to audit me (16 years old) and my $8k a year woodselling business I had with my dad. You tell me.
I'll be the dissenting voice and say this reads like "sow discord in the US 101". Why on earth would the FBI bring both the police and a "threat assessment" coordinator from your work to interview you? Why would your workplace ever agree to it? That screams lawsuit waiting to happen.
And on that note, why didn't you sue your workplace for harassment? Whether you're religious or not isn't any of their business, and religion is a protected class.
A decade ago the FBI harassed me at my home, waking me up from sleep twice, and before that at a past employer, on entirely unfounded claims.
They didn't care what the consequences were for targeting someone innocent.
They also made nasty threats like "Someone has to go down for this, and if you help us collect intel on the industry peers we suspect, then someone else can be that person."
I told them politely to go die in a fire, because I was not about to help them harass other innocent people, but it was terrifying nonetheless that they seemingly had the power to end my whole universe.
I became convinced through that ordeal that the FBI is a deeply corrupt organization that creates pressure to close cases by any means needed.
The OP's post seems totally believable and consistent with stories I have heard from others, particularly if they work for an organization that has the US government as a customer, like a defense contractor.
> Someone has to go down for this
The so called "justice" system, I guess.
You're incredibly naive if you think this kind of stuff doesn't happen _all the time_ since 9/11. I personally know several people with similar stories in the US.
You know several people whose employers sent someone to their house with FBI agents to harass them about their religious beliefs?? And none of them sued?
I'm not surprised at all that the FBI is harassing people; I find it incredibly hard to believe a private business would touch the situation with a 4,000-foot pole. They have absolutely nothing to gain and massive liability.
Is it prohibited to visit those websites? I was once interested in understanding the way radicals think, to read about their arguments, so I spent some time hanging around some radical websites.
I was visited by the FBI for doing security research that made them at least pretend to assume I was a blackhat they wanted to take down.
Use Tor browser if you are going to research anything a criminal might regardless of pure motives.
If you so much as want to research lock picking, use Tor.
ISP traffic logs can and will be twisted against you in a court of law.
I openly participate in locksport communities and I haven't had any visits from the FBI.
I'm fairly confident that those agencies use context in an automated manner to get any meaningful results.
So "keyword" (could be a word, domain or some other pattern) X may trigger only if Y and Z was already triggered. And some keyword A may only trigger if B was NOT present.
This way you can distinguish doctors, reporters or people studying history or chemistry from those who plan something.
Or e.g. ML applied to patterns over time. Globally.
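The conditional-trigger idea above ("X may trigger only if Y and Z were already triggered; A may only trigger if B was NOT present") can be sketched as a toy rule engine. This is purely a hypothetical illustration of the logic being described, not how any real agency system works:

```python
def should_flag(observed, rules):
    """Flag only when a keyword appears with its required context
    present and its excluding context absent."""
    for rule in rules:
        if (rule["keyword"] in observed
                and rule.get("requires", set()) <= observed   # Y and Z seen
                and not (rule.get("excludes", set()) & observed)):  # B not seen
            return True
    return False

rules = [
    # "keyword X may trigger only if Y and Z were already triggered"
    {"keyword": "X", "requires": {"Y", "Z"}},
    # "keyword A may only trigger if B was NOT present"
    {"keyword": "A", "excludes": {"B"}},
]

assert should_flag({"X", "Y", "Z"}, rules) is True
assert should_flag({"X", "Y"}, rules) is False   # missing context Z
assert should_flag({"A"}, rules) is True
assert should_flag({"A", "B"}, rules) is False   # excluded by B
```

The same distinction (a chemistry student vs. someone assembling several suspicious contexts at once) is what makes the false-positive problem for curious readers so plausible.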
And yes, I do not like it at all. HN is full of people who may well research some kind of bomb, religion, or whatever else out of pure curiosity, but since there are not many such people, it can be a problem in court one day.
Mix in some Snowden, your hardware stack, gag orders, and the fact that we have more laws than anybody can read, and you may feel like watching some stupid memes.
OPSEC is about lowering the probability of things going sideways, there are no guarantees either way
To quote a Dartmouth history professor who taught a class on the subject: "if you don't get _randomly_ selected for a search on your next flight you aren't doing your homework"
It's not prohibited but they notice and subject you to harassment by the system at every action with every part of the system that is integrated with their database.
This seems like hyperbole.
Did they have a warrant? Never talk to the police without counsel, and refuse all searches without warrants. "We might think you went on a website" is not probable cause. You have a _right_ to an attorney and to silence.
That's extremely disturbing. Accessing some random website should never cause police to show up. They should never even know what you did. That's like keeping tabs on what books people read and raiding somebody's house because they looked up how bombs are made.
Do you work for a government agency or contractor? That might explain why they contacted your employer so readily.
> and a 'threat assessment' coordinator from my workplace.
What was the reason for this? What type of workplace?
I'm assuming any workplace which requires a government security clearance to enter and work in.
I never use a VPN. That changes today.
You should use Tor instead. With VPN, you just shift your browsing history from one place to another.
Or worse yet, the VPN provider can sell your data.
Build your own VPN
https://github.com/hwdsl2/setup-ipsec-vpn
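The linked script automates an IPsec/L2TP setup. As an alternative illustration of "build your own VPN", a self-hosted WireGuard server on a rented VPS is another common approach; a minimal sketch, with hypothetical addresses and placeholder keys (generate real ones with `wg genkey | tee privatekey | wg pubkey`):

```ini
# Server: /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.7.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.7.0.2/32

# Client: route all traffic through the VPS
[Interface]
Address = 10.7.0.2/32
PrivateKey = <client-private-key>
DNS = 1.1.1.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vps.example.com:51820   ; hypothetical hostname
AllowedIPs = 0.0.0.0/0             ; send everything through the tunnel
PersistentKeepalive = 25
```

As the comments above note, this only moves the trust boundary: your browsing history is now visible to the VPS provider instead of your ISP.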
> We are lower middle class white Americans born and raised in the USA and have never traveled outside the country.
I am most curious why you believe that is a defense against radicalization. In the US that is perhaps the most common demographic for radicalization of any type.
Apparently, to him, "radical" only goes in front of "Islamic terrorism".
This is why you and everyone should use DNS over HTTPS (DoH).
Next day they might visit you to ask you why you are visiting an opposition party web site.
How exactly is DoH a protection? Wouldn't they just see that as a red flag? Then, get the data from cloudflare or whomever.
Most of the time they log your plaintext DNS queries, but DoH is encrypted, so they won't be able to log them. Cloudflare is not the only DoH provider; there are many. If you want, you can grab several lines of PHP code and create your own DoH endpoint in another country. Because DoH is HTTPS, they cannot distinguish it from normal HTTPS traffic. Of course, if they use deep packet analysis tools they will know what website you are visiting, but those are not used widely; they are used to target specific people. To sum up: DoH is better than plaintext DNS queries.
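For the curious, the mechanism is simple: a DoH client wraps an ordinary DNS wire-format query in an HTTPS POST (RFC 8484), so on the wire it looks like any other HTTPS request. A minimal sketch in Python that builds the query bytes (the resolver URL in the comment is just one example endpoint):

```python
import struct

def build_dns_query(hostname, qtype=1):
    """Build a minimal DNS wire-format query (RFC 1035) suitable for
    POSTing to a DoH endpoint (RFC 8484) with
    Content-Type: application/dns-message."""
    # Header: id=0 (RFC 8484 suggests 0 for cacheability), flags=0x0100
    # (recursion desired), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

# These bytes would form the body of an HTTPS POST to a resolver such as
# https://cloudflare-dns.com/dns-query; an observer on the path sees only
# an HTTPS connection to the resolver, not the name being looked up.
query = build_dns_query("example.com")
```

Note that the hostname can still leak elsewhere (e.g. in the TLS SNI of the subsequent connection to the website itself), which is why DoH helps against bulk DNS logging rather than targeted inspection.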
Please disambiguate acronyms in the absence of context.
It absolutely can.
The only words you should ever say to the FBI are "on advice of counsel, I am taking the Fifth".
This is awful advice for this specific situation.
OP apparently managed to clear up the mistake without much bother by speaking to them (although they were understandably shaken up by the experience). This presumably wouldn't have happened if they'd done what you suggest.
Not speaking to law enforcement outside the presence of your attorney is excellent advice. There's no downside to having the attorney there, and potentially life shattering downsides to attempt otherwise.
On the other hand, they could accuse OP of lying (something that's highly subjective), which is a serious federal crime.
The FBI is a criminal organization. Just look at the history of the FBI if you think that's a radical statement.
They left off one very popular messenger, SMS:
There's also:
* Law enforcement simply asks nicely: can render all message content for the last 1-7 years
The Stored Communications Act makes disclosing the contents of messages without a search warrant unlawful
Just like the NSA spying on Americans is unlawful [0] the FBI terrorizing political movements is unlawful [1] or the CIA operating in the US is unlawful [2]
Yet, I'm pretty sure all these are still happening, to a certain degree, to this day.
[0]
https://www.reuters.com/article/us-usa-nsa-spying-idUSKBN25T...
[1]
https://en.wikipedia.org/wiki/COINTELPRO
[2]
https://en.wikipedia.org/wiki/Operation_CHAOS
The larger point would be that if it's obtained unlawfully, it can't be used in a court of law against you.
Recent cases:
https://www.vice.com/en/article/pkppqk/court-throws-out-mess...
https://www.computerweekly.com/news/252503524/Berlin-court-f...
That is little consolation in the court of public opinion, where FBI management and the Justice Department have demonstrated willingness and capability to hold mob court and manipulate public opinion outside the formal legal system. They will SWAT you themselves if they like, on live TV.
Parallel Construction makes this a technicality/nuisance, not a show stopper.
came here to say this.
Generally only if you have the means to hire a good lawyer.
The NSA doesn't need to illegally spy on Americans when an ally can do it for them and then share the data legally.
https://www.nationalarchives.gov.uk/ukusa/
https://en.wikipedia.org/wiki/Five_Eyes
That's not really how it works. Sure, it is also a way to circumvent such local legislation, but for that to work American allies would need to run actual surveillance infrastructure on the US mainland proper, out in the open.
You know, like the US does in the countries of its "allies", like Germany [0]
Do you really think the US would allow German intelligence agencies to build whole complexes, plugged right into the US's largest IXP?
That's why this situation is not nearly as "symbiotic" as it's often made out to be. At best that applies to Five Eyes countries, and even there only to a very limited degree, as no Five Eyes member has as much foreign presence as the US.
[0]
https://en.wikipedia.org/wiki/ECHELON#Examples_of_industrial...
To this rhetorical question, a resounding "yes". There is credible suggestion that GCHQ has been invited to operate US facilities on US soil for this explicit purpose.
https://www.theguardian.com/uk-news/2013/aug/01/nsa-paid-gch...
Is it usually legal to compensate some other party to do something illegal? I don't think so. This situation seems something like paying an ambassador to steal something. The ambassador might not be prosecutable, but why isn't the local party? I think the real answer is "power", but that's not good enough.
The people responsible for investigating and prosecuting such crimes have some not so great incentives to avoid doing so and keep the whole thing secret though, don't they?
And then when they get caught, they do this:
https://cdt.org/insights/the-truth-about-telecom-immunity/
Sounds like an easy way to have your case tossed out in court.
It's funny how much this differs from my own personal experience with law enforcement. The friends I know are timid as hell and don't do anything without a warrant just to stay on the safe side- even if they probably don't need one.
Good luck with that. In my case there were a ton of violations of the SCA. Violations of the SCA are only actionable if they are "constitutional" in nature. (That essentially means that if the government indicts you based on information they illegally gathered through violating the SCA, but the information did not belong to you - say it belonged to your wife or business partner - then you can't get the information suppressed/excluded in court.)
In my case the government did violate the SCA and my constitutional rights, but two judges have looked at it and both stated the same answer: the police must be allowed to commit crimes to gather evidence. Next stop: the appeals courts.
Yep, the courts side with law enforcement. The whole 'truth comes out in a fair fight' is completely undermined by this. The system protects itself above all else.
I was involved with a case that sounds similar - the judges don't care about your rights and blatantly misapply the law. Also, magistrates are _complete_ BS and don't even know basic legal stuff. I had one think I called him prejudiced when requesting a case be dismissed with prejudice... Complaints do nothing. There's no real oversight, leading to a completely incompetent system.
> _There's no real oversight, leading to a completely incompetent system._
It's the system working as intended. If you want something that looks like justice, you'll need substantial wealth to get it.
You generally have to assume that the FBI and other government agencies are competent. My baseline starting assumption is that if everyone in the US were too scared to use programs like PRISM, they wouldn't have been built.
So these kinds of claims just don't make any sense in a world where we _know_ that government has conducted surveillance without a warrant, and where we know that the FBI has built entire programs designed to make it easier for them to conduct surveillance without a warrant.
From the article posted that you're replying to:
> What Administration officials tend to obscure is that what they seek is not immunity for future cooperation with lawful surveillance, but rather telecom immunity for assisting with unlawful surveillance conducted from October 2001 through January 17, 2007, as part of the warrantless wiretap program initiated by the White House.
I'm not sure I understand what your implication is. I don't understand how it's possible to respond to an article that is about telecoms seeking immunity for previous unlawful actions by saying, "the government/businesses would be way too scared to do anything unlawful." I mean... obviously not, they sought immunity for it. They wouldn't just randomly do that, the most likely explanation is that they made immunity a pressing issue because _they thought they needed it_.
It does not seem to me that the optimistic world you describe and the observable actions and lobbying efforts of companies/administrations line up with each other.
I'm just glad you're here to stick up for your friends without any corroboration or linking story. It's just a good thing to do.
Being charitable, let's assume his friends work as homicide or theft detectives. If so, they need a high standard for admissible evidence to build their cases.
If, on the other hand, his friends are street cops tasked with clearing a corner of drug dealers because some neighbor complained to their council person who complained to the police chief, then those cops don't necessarily care about extrajudicial activities.
Having been harassed by street cops and interacted with homicide detectives, I can tell you they vary tremendously in professionalism.
They definitely need a high standard for admissible evidence, but that doesn't stop them from purchasing large amounts of data from all-too-willing communications companies and using parallel construction to build their case once they find out what happened via warrantless spying.
They can also query these messages to see if there is something on the dealers they get paid by, and then warn them if something comes up. It works both ways, no?
Cybercrime. Lots of scams and child abuse.
The really smart cops get the tips using "less than legal" means, then walk back and reconstruct the case using legal evidence.
"Parallel Construction:"
https://en.wikipedia.org/wiki/Parallel_construction
"Sounds like an easy way to have your case tossed out in court."
This is terribly naive in my experience.
Imagine a world where the entire law enforcement complex followed the law. What a world.
Let's be honest, how often do people share with their pals about how they commit crimes, or are less than scrupulous, at work, assuming their pals aren't criminals, as well? People tend to keep things like that a secret, even from people that are close to them.
You are correct. There's also varying 2-party/1-party consent required, depending on the state, in the absence of a warrant. But unless you're targeting the devices, you will not get much at all from service providers. They simply don't keep it, contrary to what I read here.
EO12333 makes it lawful without a warrant.
> EO12333
An EO making it lawful for a federal agency to collect doesn't mean it is lawful for a private company to disclose, it doesn't change when a company is permitted to disclose the content of messages under the SCA
I mean, this whole discussion is moot since nobody will enforce things like this, especially against themselves.
The reality is that many times the only barrier to sensitive information is a shared login which many people know and a statement that users represent that they have legal authority to access that info.
Tell that to the FAANG companies that provide white glove access to authorities any time they ask.
this guy, lmfao, completely fucking clueless to what policing is
* Law enforcement wants to stalk ex-girlfriend: can render all message content for the last 1-7 years
Companies also sell data to law enforcement.
Many tech companies even develop nice portals for law enforcement to use where they can request and view data, with or without a warrant or subpoena.
Major service providers do not maintain SMS history beyond 24 hours, let alone 1-7 years (as of the last time I worked a case, that is). They're transparent about it as well. Look up the LE liaison contacts on their sites and they'll clearly list what is or isn't available. That's why it's crucial to get the actual devices themselves. Reason: the infrastructure to manage SMS content for every customer for 7 years, with zero business justification/use case, is phenomenal. They'd spend most of their time responding to civil and criminal subpoenas/warrants. That would be a feat the NSA would be proud of. Been there and done that a hundred times. (This also aligns with certain VPN providers refusing to keep logs. It's a cost that provides zero returns, so they cut it as a business decision, not because they're trying to stick it to the man.)
I went to a major cell provider and asked them nicely for access to SMS for all their customers and they happily took money and gave me an API.
This was for a startup.
I have no doubt they do the same for governments.
If I understand this correctly, you're saying a major cell provider is selling you access to subscriber SMS message content?
I'm surprised to hear this has changed so significantly since the Snowden leaks. Especially after the blatant attack on Qwest CEO Joseph Nacchio for refusing to spy. It was established then that the major mobile telcos in the USA were keeping and providing full SMS data for 2-5 years (T-Mobile, AT&T, Verizon, etc.).
and also that the government was subsidizing the programs when the companies complained about the added costs.
There's no reason _for_ them to keep those records, other than for law enforcement's sake. No use case for calling up your operator to ask about that text message you got "from Fred at 4am one day a couple years ago."
> _Major service providers do not maintain SMS history beyond 24 hours, let alone 1-7 years_
Nobody should make decisions based on this comment.
Agreed. Do your own due diligence.
You forgot email... and they don't need a warrant for messages older than 180 days if in the cloud (and they never delete them, either):
https://www.consumerreports.org/consumerist/house-passes-bil...
IIRC the only reason this amendment was made was because the 180 day limit was found unconstitutional anyway by an appellate court. So, technically the amendment did nothing.
It doesn't matter where your data is held, locally or cloud, (if you are an American resident and your data is in the USA) as it is _your_ data and it is unconstitutional for them to read it without a warrant. In theory.
> It doesn't matter where your data is held, locally or cloud
In the US it does
Citation?
This ruling has been adopted by the US Supreme Court:
https://privacylaw.proskauer.com/2007/06/articles/electronic...
Look at the link I posted up there; it is 10 years newer than yours
If they are local and encrypted... oops, forgot the encryption key.
Source is a few years old, but I suppose we can make another FOIA request to find out how long carriers store text messages these days - it was basically 0-5 days a decade ago:
https://www.nbcnews.com/technolog/how-long-do-wireless-carri...
Idk... back in the mid-2000s my parents managed to get a transcript of a few months of my (minor) sister's SMS messages (as part of a billing dispute).
You'll be lucky if it's any longer than 24 hours now. There's no business use case for building and maintaining the technological infrastructure to manage it for years. It's private info and they can't sell it to anyone without legal liability. If LE gave them the funds to build this infrastructure and use it for retention, then the service provider is essentially an agent of the state at that point.
You're overstating the technical difficulty of archiving and retrieving text.
I can only imagine that the scale of all US SMS messages is absolutely staggering. It probably eclipses all other text formats combined in terms of daily production. Here's a blog post from a few years ago estimating it at 26 billion text messages per day and rising:
https://www.textrequest.com/blog/how-many-texts-people-send-...
Not counting media and assuming they are all 160-byte messages, that's 4 terabytes per day, or about 200 Wikipedias per day. I guess that's not too bad in terms of storage requirements, certainly a manageable amount of data for a telecom to store. But assuming that you want those indexed and easily retrievable somehow, it could get very burdensome to manage and interact with, and that tends to balloon the size at least a little bit as well.
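As a sanity check on that back-of-the-envelope math, here's a small Python sketch. The 26 billion messages/day figure comes from the linked blog post; the flat 160-byte payload per message is an assumption that ignores media, multi-segment messages, and metadata:

```python
# Rough estimate of US SMS archival storage, assuming:
#   - 26 billion messages/day (figure from the linked blog post)
#   - a full 160-byte single-segment payload per message (no media/metadata)

MESSAGES_PER_DAY = 26_000_000_000
BYTES_PER_MESSAGE = 160  # max GSM single-segment SMS payload

daily_bytes = MESSAGES_PER_DAY * BYTES_PER_MESSAGE
daily_tb = daily_bytes / 1e12                   # terabytes per day
seven_year_pb = daily_bytes * 365 * 7 / 1e15    # petabytes for a 7-year window

print(f"{daily_tb:.1f} TB/day")          # 4.2 TB/day
print(f"{seven_year_pb:.0f} PB over 7 years")  # 11 PB over 7 years
```

So the raw bytes are modest by telecom standards; the real cost, as the thread notes, is in indexing, retrieval, and the legal exposure of holding it at all.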
The liability and legal issues around it (both externally and internally - don't want employees spying on their exes, leaking data from celebs, in addition to the policing issues, etc) makes it pretty undesirable to store though.
It's about secure messaging
This seems like a good place to say that I strongly recommend Yasha Levine's Surveillance Valley book (
https://www.goodreads.com/book/show/34220713-surveillance-va...
) where he suggests that all of this is working as intended, going all the way back to the military counter-insurgency roots of the arpanet first in places like Vietnam, and then back home in anti-war and leftist movements. The contemporary themes that are relevant are the fact that current privacy movements like Tor, Signal, OTF, BBG are fundamentally military funded and survive on government contracts. It distracts from the needed political discourse into a technology one where "encryption is the great equalizer" and everyone can resist big brother in their own way on the platforms the government has built. Encryption does exist, but it also distracts from other vectors like vulnerabilities (that led to Ulbricht getting caught), what services you would e2e connect to, how you get the clients to connect to those services, what store can push binaries for said clients etc.
Yasha Levine is a conspiracy theorist hack. There's really no other way to say it. His narrative is attractive to a left-leaning audience with shallow knowledge in this area, but the reality is that without publicly funded software like Tor, Signal, OTF, and my own Lantern, our world would be more fully saturated with corporate control of the internet. We need more public funding for open source software (with public security audits, mind you), not less. Without them, we'd basically be left with Wikipedia as the only popular entity on the internet outside of corporate control.
All of these projects are more properly grouped with government funding in other spheres, such as the BBC or PBS in media, than they are with the surveillance state or the NSA. Levine overlooks basic details, such as reproducible builds, that quickly collapse the house of cards that is his narrative. He tries to paint them all with the NSA brush, when, in fact, they're simply projects that have historically received some of their funding from the government while fulfilling missions with extraordinary humanitarian benefits. Levine's own knowledge and experience in this area is shallow. Look elsewhere.
I don't disagree with what you're saying. I'm not sure your statement is in disagreement with mine either? I don't think he's saying less OSS is better or anything dogmatic? All he's saying is that using Tor/Signal shouldn't be the end all be all of your surveillance concerns.
> would be more fully saturated with corporate control of the internet
You might disagree. His point was that the "corporate controllers of the internet" support projects like Tor because A) it gives a (somewhat ineffective) channel for people to focus on rather than political recourses and B) there's no real threat to the corporate model. What would you do in this e2e encrypted internet without corporate services?
> such as reproducible builds
Seems like a tangential point. You can have an untampered copy of a client with a vulnerability.
> funding from the government while fulfilling missions with extraordinary humanitarian benefits
I don't think this is in disagreement with anything either
> from the government while fulfilling missions with extraordinary humanitarian benefits
Ahh yes, the famed operation Condor, operation Gladio, operation iceberg and so many other famed "humanitarian" projects
At the end of the day, all that you mentioned goes back to a post-facto "it is good because *we* do it". I would go so far as to say that most people here on HN are well aware of the start of Google, when it was funded by US Intel as a way to parse Vietnam-era datasets, or how US Intel uses Radio Free Asia to destabilize enemy countries abroad, but again, it is only good/not bad when "*we*" do it
Apologies for a rather low-quality comment, but these types of persons handwaving away the actual structure behind all of this really get on my nerves, especially when I have had family members tortured as a consequence of these US activities
I'm certainly not defending all US government actions. That's exactly the point. Levine tries to lump all of this in with surveillance. The US government funds the NSA, that is true. It also funds food stamps. And torture. The trick is to untangle it.
> The trick is to untangle it.
USAID is _specifically_ designed and named that way so _as to tangle it_. Tell me, how would your average Joe understand that USAID is an intelligence agency spinoff designed to sound "good" while doing evil all over the world, rather than what its name suggests? You know... aid?
The NSA, CIA, extraordinary rendition and so many other things don't exist there by accident. If said """government""" wishes to spend such amounts of money and resources to enact such evil under the veil of security, then I don't know about you, but to me and several other people that just reads as "US Gov being flat out evil"
Do remember that there was *wide* support and acceptance back in the Kennedy days to just dissolve the CIA
> Levine tries to lump all of this in with surveillance.
I am not particularly kind to the guy, but he's merely looking at it on a holistic, system-design level; any programmer-minded person would do the exact same thing when presented with a black-box problem
But as far as the food stamps go, wouldn't it be great if the system were set up in such a way that food stamps were not needed to begin with? And on the flip side, why would "the government" allow for such a societal structure where the maintenance of "food stamps" is necessary for the organization of the nation? I see that last bit in particular, if anything, as a national security problem...
As Clintonites would say: "It is the economy stupid"
It seems obvious that USAID is an intelligence front (I've encountered a few instances where it was mentioned that someone worked for USAID at the time, while it was simultaneously obvious that it would make way more sense if they were Intelligence), but is there any concrete evidence for that?
> any concrete evidence
What do you mean by "concrete evidence"?
None of this is disputed; they even have their own Wikipedia pages for their different operations and branches within USAID
https://en.wikipedia.org/wiki/Office_of_Public_Safety
*Especially* since we are talking of USAID. In the case of NED, for example, things get slightly murkier because then it is a matter of private rather than public record, but it still works as a tool for management of semi-clandestine operations and operations which need plausible deniability from the CIA's end, or at least as much deniability as it can muster, tho these days they prefer to work with shell groups and other associated partners such as, for example, Atlas Network; Radio Free Asia also falls in that category, same with Voice of America
If you are interested in books, both Killing Hope by William Blum and Legacy of Ashes by Weiner are very, very, very good authoritative sources on the matter
If you prefer podcasts, Radio War Nerd has a couple of very good episodes on the National Endowment for Democracy, tho they both quote excerpts of the books above
Radio War Nerd EP 274 – National Endowment for Democracy, Part 1
https://podcastaddict.com/episode/121232504
Radio War Nerd EP 275 – National Endowment for Democracy, Part 2
https://podcastaddict.com/episode/121522126
While those programs certainly existed, this is a blatant false equivalence; you can still have humanitarian programs while being a military hegemony. It's not one or the other.
This is in fact a distinct reason the CIA/NSA (and vice versa) won't accept recruits who have previously served in the Peace Corps, amongst other reasons.
This comment is an incredibly naive attempt at a smear.
> Without them, weâd basically be left with Wikipedia as the only popular entity on the internet outside of corporate control.
Wikipedia is absolutely not "outside of corporate control". It is trivially astroturfed to advance special interests.
> All of these projects are more properly grouped with government funding in other spheres, such as the BBC or PBS in media
Both BBC and PBS routinely publish outright disinformation to advance the special interests of their corporate/government clients, including the intelligence community. For example, look at PBS Frontline's ridiculous puff piece for the violent extremist group HTS last year.
> Levine overlooks basic details, such as reproducible builds
Reproducible builds are also easily circumvented by _selectively_ deploying backdoors and other malware, based on IP or other fingerprints.
If there are good reasons to dispute Levine's investigative journalism, they're not here.
Um, ok. All of the above projects not only use reproducible builds for many platforms, but they're all open source, and they all have public security audits. Those three pillars are about as good as it gets. Is there something you would add?
I'm not claiming PBS and the BBC are perfect entities, but they do offer an alternative source of information that runs against the grain of corporate media. You would prefer... what exactly?
> Is there something you would add?
Let's start with "not being created/funded by the State Department or Pentagon".
> You would prefer... what exactly?
Again, let's start with "not being blatant propaganda produced by warmongers".
First, there's a vast difference between the State Department and the Pentagon. Lumping those two together just reflects an unsophisticated understanding of the federal government. Signal has never received any State Department or Pentagon money. Tor had a significant early contribution from a researcher at Naval Research. That's the extent of any Pentagon funding. They have received significant State Department funding, but to call the State Department "warmongers" is just not accurate.
Please stop spreading misinformation. From the Tor Project's public IRS documents:
> WHILE FUNDING FOR TOR ORIGINALLY FOCUSED ON BASIC RESEARCH TO BETTER UNDERSTAND ANONYMITY, PRIVACY, AND CENSORSHIP-RESISTANCE, THE MAJORITY OF FUNDING NOW FALLS INTO THREE CATEGORIES: DEVELOPMENT FUNDING FROM GROUPS LIKE RADIO FREE ASIA AND DARPA TO DESIGN AND BUILD PROTOTYPES BASED ON RESEARCH DONE BOTH INSIDE TOR AND ALSO AT OTHER INSTITUTIONS; DEPLOYMENT FUNDING FROM ORGANIZATIONS LIKE THE US STATE DEPARTMENT AND SWEDEN'S FOREIGN MINISTRY; AND UNRESTRICTED CONTRIBUTIONS FROM PRIVATE FOUNDATIONS, CORPORATIONS, AND INDIVIDUAL DONORS. FOLLOWING IS A BREAKDOWN OF THE TOR PROJECT'S FUNDING SOURCES FOR THE PERIOD ENDED JUNE 30, 2020: FUNDING FROM US GOVERNMENT SOURCES: US STATE DEPT - BUREAU OF DEMOCRACY, HUMAN RIGHTS AND LABOR 752,154; GEORGETOWN UNIVERSITY - NATIONAL SCIENCE FOUNDATION 98,727; RADIO FREE ASIA/OPEN TECHNOLOGY FUND 908,744; NEW YORK UNIVERSITY - INSTITUTE OF MUSEUM AND LIBRARY SERVICES 101,549; GEORGETOWN UNIVERSITY - DEFENSE ADVANCED RESEARCH PROJECTS AGENCY 392,008. FUNDING FROM NON-US GOVERNMENT SOURCES: DIGITAL IMPACT ALLIANCE - UNITED NATIONS 25,000; SWEDISH INTERNATIONAL DEVELOPMENT COOPERATION AGENCY (SIDA) 284,697. FUNDING FROM CORPORATE SOURCES: MOZILLA 157,500; AVAST 50,000; MULLVAD 50,000. FUNDING FROM PRIVATE FOUNDATIONS: OPEN SOURCE COLLECTIVE 23,100; MEDIA DEMOCRACY FUND 270,000; ZCASH FOUNDATION 51,122; MOZILLA OPEN SOURCE SUPPORT (MOSS) 75,000; RIPE 53,114; CRAIG NEWMARK PHILANTHROPIC FUND 50,000; STEFAN THOMAS CHARITABLE FOUNDATION 50,000; KAO FOUNDATION 10,000; MARIN COMMUNITY FOUNDATION 1,000; INDIVIDUAL DONATIONS 890,353
Yes, they've received funding from DARPA. I realized I forgot that after I posted. Good catch. To my knowledge, that funding is for new anti-censorship transports to sneak traffic in and out of censored countries.
And the State Department are definitely warmongers.
SecState Kissinger orchestrated the incineration of Laos, Cambodia and Vietnam.
SecState Powell orchestrated the flattening of Iraq.
SecState Clinton orchestrated the butchering of Libya.
SecState Pompeo tried and failed to orchestrate the annihilation of Iran by assassinating top officials and drawing them into war.
And so on and so forth. These aren't even theories. The State Department is closely involved in destabilizing sovereign governments through the full spectrum of means, including war, to advance Washington's interests.
>my own Lantern
Brilliant riposte, but I am curious what software are you referring to here?
A quick look through their comment submissions points at:
https://news.ycombinator.com/item?id=20824759#20826587
Signal isn't funded by the military, by OTF/BBG, or any branch of the US government. People who claim otherwise are confused (deeply) about a program OTF ran that sponsored third-party security reviews and development projects (summer-of-code style), none of which was mediated through OTF --- it was just a bucket of money.
You should be extremely skeptical about people who bring OTF/BBG up in these discussions. I have complicated feelings about Tor stemming mostly from culture and effectiveness concerns and would push back on claims that it's co-opted by the Navy or corporate interests, but at least I can see a clear (if silly) line connecting Tor to these supposed conflicts of interest.
> Signal isn't funded by the military
Correct, it is not funded by "the military", but this is incorrect
> any branch of the US government
Because Signal/TextSecure received considerable amounts of seed capital from Radio Free Asia, which is a CIA spinoff, with the explicit aim to fund the development of the cryptography at the grassroots level; not per se to have full control of it like the NSA would have done, but because having strong cryptography on such platforms (Telegram might be another) is highly effective against perceived US enemies like, well... Iran, or Syria, and to allow their assets/agents to communicate more easily while abroad without bulky extra proprietary phones or software
All of that above is mentioned at length on Surveillance Valley btw
It's like you read 4 words from my comment and stopped.
As I understand it, the technology behind Tor is strengthened by an arms race. You _want_ several different well-funded entities running nodes, because that makes the service better for everybody. Even if some of those entities are _hostile_, they still help unless one entity controls a large portion of interior nodes, and even then you're only giving metadata to that single entity (whichever it is) by using Tor, not anybody else - which is better than you're going to do with alternative technologies.
This analogy unfortunately cuts both ways: if you've got technology that undermines the majority government / power structure in a secure fashion, you'll always have the ability to come in as an intelligence agency and foment an insurgency movement.
Which also unfortunately points to them having exploits no one has discovered yet in said technology tools.
They can still maintain generalized situational control via additional superiority vectors(MASINT, HUMINT, GEOINT, OSINT, FININT etc.)
Ulbricht was caught via poor OPSEC and not via a Firefox/Tor 0day, afaik. Though there was/is speculation that a Firefox/Tor 0day was used to bring down some Tor markets and possibly to locate the Silk Road's server. Silk Road 2.0 was brought down in like a few months, which could indicate such a 0day existed. Or that it was run by some former Silk Road staff members who got doxed when Silk Road 1.0 was shut down.
Ulbricht was caught because an FBI agent, who would read things slowly and twice, recognized these 4 letters: _heyy_.
That's how Ulbricht sometimes spelled _hey_, and the agent had seen that particular spelling before in his investigation, in an email from Ulbricht's student email address.
Nick Bilton's book "American Kingpin: The Epic Hunt for the Criminal Mastermind Behind the Silk Road" is a great read, highly recommended.
it strikes me as extremely naive to take this at face value. see
https://www.reuters.com/article/us-dea-sod-idUSBRE97409R2013...
much more likely -- sigint tooling was applied to identify ulbricht, bulk metadata was turned over for his comms history, and it was pored over for things they could connect with sr to get warrants. imo, at least.
but getting to claim you're such a sharp investigator that you can figure it out by noticing the word _heyy_ makes for a much better story to tell an author.
It was more complicated than just _heyy_, but I won't spoil the book.
It's been awhile since I've read it, but my impression was that solving the case was mostly traditional casework, and a lot of it, by many different people/agents/agencies.
That Reuters article certainly gives pause. Thanks for the link.
That's what they want you to think. He was caught because nothing can match the surveillance arsenal of the NSA.
That's not what I think, that's what Nick Bilton thinks. The quality of his book makes me partial to his thesis, of course, but NSA conspiracy blah adds nothing.
Also, lots more went into catching him than just _heyy_, but that was the lucky break that got him caught. Now he shares a prison with the Unabomber, Ted Kaczynski.
That could be the story, but since parallel construction is routinely used to hide the existence of surveillance tools and back doors, it's not unreasonable to doubt it.
I thought I had heard it was Stack Overflow; is that looped in somehow?
I don't recall StackOverflow being mentioned, no, but it's been a few years since I've read it.
https://slate.com/technology/2013/10/silk-road-s-dread-pirat...
Correction: He was transferred to a penitentiary in Tucson, Arizona.
Have to admit, I was impressed with the US government's ability to recover bitcoin ransoms paid for cyberattacks. I'm not sure if impressed is the right word.
Wtf, who doesn't add extra y's to hey sometimes? That wasn't evidence.
I don't want to spoil the book; but, yes, that detail got him caught.
It's not fiction you're spoiling, but a factual conversation about events that you're not going into due to spoilers. It is an odd defence that kills the conversation when other people bring up good points.
The parallel construction argument seems way more plausible if there's nothing else besides "heyy". If there is more, please say what it is instead of mentioning it exists but refusing to say it.
Where is any evidence of Tor being a military surveillance project? I find it hard to believe an open source project like this has been infiltrated. Yes, there is suspicion that some ECC curves are compromised, but only the ones provided by NIST. I'd really like to see the evidence regarding Tor.
The seed for that line of thinking is the fact that a US Navy lab built it.[0] Having said that, I believe that's the only basis and is a far cry from making the theory convincing or even probable.
[0]
https://en.m.wikipedia.org/wiki/Tor_(network)
Wow, I feel like an idiot. All this time I had no idea the Navy built it, when a simple Wiki search would have pointed that out. Thanks!
"The Navy built it" is a bit of an exaggeration. Paul Syverson did early work on it at the Naval Research Lab, and Roger Dingledine and Nick Mathewson added to the collaboration at approximately the same time, with neither having anything to do with the Navy. That's the extent of the military connection - some relationship in the first year or so of an 18-year-or-so project.
There's been a suspiciously downplayed number of ephemeral hidden services that get raided / internationally taken down on the Tor network for it to be mere circumstance.
No one tries to take notice since they're hosting the worst content on the internet regularly.
Could as well be insiders though and operations that were planned for years.
Did you even click on the link? Signal gives away NOTHING.
Thank you. I never knew the source of the ridiculous theory that the internet sprang from spying attempts on the Vietnamese. I am always looking for keywords to filter conspiracy weirdos; "Yasha Levine" added.
"are the fact that current privacy movements like Tor, Signal, OTF, BBG are fundamentally military funded and survive on government contracts."
Are those "facts" available for investigating, without having to buy the book?
(that Tor is partly US administration funded is known, but Signal? And what are OTF and BBG?)
https://www.opentech.fund/results/supported-projects/open-wh...
Funded by Open Technology Fund (OTF)
https://en.wikipedia.org/wiki/Open_Technology_Fund
Which is funded by Radio Free Asia (RFA)
https://en.wikipedia.org/wiki/Radio_Free_Asia
. It had a few reboots but was created as a CIA program in 1951 (
https://en.wikipedia.org/wiki/Radio_Free_Asia_(Committee_for...
) to blast shortwaves into China from Manila to try to overthrow the Chinese government. Rebooted more recently since the advent of the great firewall of China.
Wow, that is so thin it is transparent. If this is the sort of 'proof' that we are going to find then I am glad you posted the ref here so that I could add yet another kook to the list of those whose privacy/security rantings and books I can ignore. The biggest danger to long-term privacy projects is not the risk of taking advantage of an opportune partnership with a government agency when incentives align, it is conspiracy nutjobs poisoning the well with their paranoia and delusions.
And Signal?
The main tool, used for private communication?
So if you have something to hide, don't use iCloud backup.
And WhatsApp will give them the target's full contact book (was to be expected), but _also_ everyone that has the target in their contact list. That last one is quite far-reaching.
> if you have something to hide
Most people don't realize that most people have something to hide. The USA has so many laws on its books. Many of which are outright bizarre[0] and some of which normal people might normally break[1].
And that's only counting _current/past_ laws. It wasn't that long ago that a US President was suggesting all Muslims should be forced to carry special IDs[2]. If you have a documented history of being a Muslim, it could be harder to fight a non-compliance charge.
[0]
https://www.quora.com/Why-is-there-a-law-where-you-can-t-put...
[1]
https://unusualkentucky.blogspot.com/2008/05/weird-kentucky-...
[2]
https://www.snopes.com/fact-check/donald-trump-muslims-id/
I always liked this one I found in the Illinois statutes - it basically criminalizes every person online:
Barratry. If a person wickedly and willfully excites and stirs up actions or quarrels between the people of this State with a view to promote strife and contention, he or she is guilty of the petty offense of common barratry[.]
https://www.ilga.gov/legislation/ilcs/ilcs4.asp?DocName=0720...
Barratry typically implies that this is specifically being done with lawsuits and other legal instruments, not in the general case.
There is a renaissance of such laws regarding causing offense. That would basically cover anybody whose face you don't like? I wonder how much consideration goes into suggestions like this. Side effects should normally hit your face like a truck.
Did you even read the Snopes article you referenced before making what seems like a definitive claim about how Trump was suggesting Muslims carry special IDs? Because Snopes' own rating is a "Mixture" of true and false, and if you read the assessment, it is grasping at straws to even reach that conclusion.
Yes, "mixed" means you have to read the nuance. I think I accurately captured the reality. If you have a correction to offer, please do.
EDIT: Ultimately, the nuance in that history is not relevant to the point that criminal law changes to include new categories in unexpected ways.
Sure, I can accept there is some nuance but the phrasing and definitive manner of your original statement is very misleading. I'm not the biggest fan of the guy but casually mentioning that he suggested the idea when in actuality it was an idea posed by a reporter is bad faith in my opinion.
> "Certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy," he added. "We're going to have to do things that were frankly unthinkable a year ago."
> "We're going to have to look at a lot of things very closely," Trump continued. "We're going to have to look at the mosques. We're going to have to look very, very carefully."
That's all he said to the interviewer. The interviewer was asking the hypothetical and suggested the special identification! He wouldn't take the bait, so since he didn't answer the hypothetical they said "he wouldn't deny it" and wrote a campaign of hit-piece articles anyway. Whatever response they got, they would have written that same piece. If he had answered one way, they would have quoted him out of context. Since he responded generically, it's obviously drummed up. The fact check is hilarious. "Mixed", lol.
Never answer a hypothetical, it's always a trap.
Your last sentence just made me freak out thinking that I've previously done such stupidity in front of a "law officer".
I never for one second thought it could be a trap; I was overly willing to cooperate and truthfully respond to a "theoretical" inquiry. Damn, it hurts in retrospect.
> That's all he said to the interviewer
And then the next day, he clarified:
Reporter: "Should there be a database or system that tracks Muslims in this country?"
Trump: "There should be a lot of systems, beyond databases. I mean, we should have a lot of systems."
And then he tried to backpedal. Decided it was a watch list, not a database, etc. Basically the usual shtick of his where he tries to say everything and nothing at the same time.
Again that's a generic response:
> There should be a lot of systems, beyond databases. I mean, we should have a lot of systems
Beyond databases. What does that mean? That could be analog systems, that could be anything not stored in a computer.
Nothing to do with identification which would need a database. It's a generic answer to avoid a hypothetical. It's a nonanswer.
He said nothing, not everything. You are attributing the reporter's question to him. The reporter is posing the hypothetical that they created in the first place in the initial interview.
My main point was that hypotheticals are always a trap (unless among friends!), but that's a great example of an obvious one.
The usual shtick is to say nothing, because the journalistic usual shtick is to ask gotcha hypotheticals.
You're kind of quibbling over details. The below quote is already bad enough:
> "We're going to have to look at the mosques. We're going to have to look very, very carefully."
I already do not trust the person who has said that. Does it really matter if he proposed a full-fledged ID system? He still proposed monitoring mosques. He still proposed surveillance based on religious identity.
The correct answer to that question, "should Muslims be subject to special scrutiny", is a simple "no". I don't really get the debate about hypotheticals; this is a question that does have a straightforward, right answer. And the implications here in regards to surveillance and ordinary people having stuff to hide -- those implications are all the same regardless of whether or not Trump actually proposed a literal database.
He was open to increased surveillance on Americans based on their religious identity, he didn't immediately shut the idea down.
Details are important. The media campaigns are claiming he wanted Muslim identification, a system THEY proposed in their hypothetical. When he didn't confirm it, they cited "he wouldn't deny it" as their proof of support.
> The below quote is already bad enough. He still proposed surveillance based on religious identity.
He said nothing about citizens or monitoring them based on religious identity. He said look at mosques, that's all. Mosques are often the target of attacks.
https://search.brave.com/search?q=mosque+coordinated+attack&...
Are you proposing that increased surveillance of mosques is to _protect them?_ That requires a certain level of imagination given the full context of the quote:
> "Certain things will be done that we never thought would happen in this country in terms of information and learning about the enemy," he added. "We're going to have to do things that were frankly unthinkable a year ago."
> "We're going to have to look at a lot of things very closely," Trump continued. "We're going to have to look at the mosques. We're going to have to look very, very carefully."
----
And once again, it kind of doesn't matter. An increased focus on monitoring places of worship _is_ monitoring people based on their religious identity. I don't know a single Christian who would argue to me that monitoring churches isn't the same thing as monitoring Christians.
Mosques and churches are not abstract concepts that are divorced from the people inside of them. When you monitor an institution, you are necessarily monitoring the people inside of it, and it is reasonable for them to be concerned about the government taking an interest in their religious identity. To argue otherwise requires someone to completely divorce religious identity from the _practice_ of religion, and that's just not a reasonable argument to make.
----
> Details are important.
Not in the context of the original statement, "ordinary people often do have something to hide, and should care about privacy." Look, whatever, you trust Trump. You shouldn't, but you do. Fine.
Do you trust Biden? Do you trust the current government not to attempt to monitor you based on your vaccine status?
You're fighting over the idea that "your guy" wouldn't surveil ordinary people, but this also kind of doesn't matter because your guy isn't in the White House right now, and I can guarantee you that Republicans are never going to have permanent power over the government. No party wins forever. You have as much reason as anyone else to care about personal privacy, so why are you fighting over who specifically is a threat? Does it change anything about the overall privacy debate?
> Again that's a generic response
Like I said, he always manages to say exactly the right things so the people who support him will read between the lines, but leave just enough ambiguity so those same people can quibble constantly over whether that was what he really meant.
> hypotheticals are always trap
He could have just said "No." Or "I have no such plans at this time." if he wanted to sound like a typical politician. His circumlocution is legendary, because it allows _everyone_ to believe what they want to believe. Politicians all have this problem, but Trump elevates it to a whole new level.
You and the person you are communicating with must both not use iCloud backup. And since Apple pushes the backup features pretty heavily, you can be reasonably sure that the person you are communicating with is using backups. I.e., you cannot use iMessage.
I got off all Apple products when they showed me their privacy stance is little more than marketing during the CSAM fiasco, but IIRC the trouble with iCloud backup is it stores the private key used to encrypt your iMessages backup. Not ideal to be sure, but wouldn't iMessage users be well protected against dragnet surveillance, or do we know that they're decrypting these messages en masse and sharing them with state authorities?
You wouldn't think most large states have hacked Apple's iCloud backup servers 20 times over by now?
iCloud backup can back up your whole phone, specifically the files section. iOS and OSX users can save anything to that.
Has Apple made any public statements regarding iCloud's lack of privacy features? It takes the wind out of their privacy marketing, which effectively hurts ad tech but doesn't truly protect consumers from state-level actors with data access.
Kind of. These details are indeed publicly written on their website[0]. Do many users ever read this page? Probably not.
[0]
https://support.apple.com/en-us/HT202303
Here is an excerpt. The language sounds like encryption is enabled and the chart includes iCloud features as server and in transit protected. Seems like smoke and mirrors then.
> On each of your devices, the data that you store in iCloud and that's associated with your Apple ID is protected with a key derived from information unique to that device, combined with your device passcode which only you know. No one else, not even Apple, can access end-to-end encrypted information.
E2EE for backups was in the iOS 15 beta but was removed (it did not land for release) after they changed the timetable of the CSAM scanning feature. So we will see if we get E2EE backups once that image scanning lands.
Can you turn that off if you have iCloud, or do you need to not use iCloud altogether?
Yes, and you can delete old backups on iCloud - and then switch to local, automatic, fully encrypted backups to a Mac or PC running iTunes.
HN tends to get very frothy-at-the-mouth over Apple and privacy, but the reality is that iPhones can easily be set up to offer security and privacy that are best in class. They play well with self-hosted sync services like Nextcloud, and unlike the Android-based "privacy" distros you're not running an OS made by a bunch of random nameless people, you can use banking apps, etc.
The only feature I miss is being able to control background data usage like Android does.
You can turn it off individually just for Messages, but you're still left not knowing the state of the setting on the other end.
If you have something to hide, don't give a copy to _any_ third-party.
even a second-party is a risk.
It says Telegram has no message content. Isn't Telegram not E2EE by default, instead requiring explicit steps to make a conversation encrypted?
Either way, it looks like Signal wins by a lot. Its spot is so small, it seems almost squeezed in. But only because they have nothing to share.
for signal users this means the messages of course _do_ exist on your phone, which will be the first thing these agencies seek to abscond with once you're detained, as it's infinitely more crackable in their hands.
as a casual reminder: The fifth amendment protects your speech, not your biometrics. do not use face or fingerprint to secure your phone. use a strong passphrase, and if in doubt, power down the phone (android) as this offers the greatest protection against offline bruteforce and sidechannel attacks used currently to exploit running processes in the phone.
My advice if you're not on the level where three letter agencies are actively interested in your comings and goings:
- Use a strong pass phrase
- Enable biometrics so you don't need to type that pass phrase 100 times per day
- Learn the shortcut to have your phone disable biometrics and require the pass phrase, so you can use it when the police are coming for you, you're entering the immigration line at the airport, etc. On iPhone this is mashing the side button 5 times
> Learn the shortcut to have your phone disable biometrics and require the pass phrase
On my Pixel (Android), it's hold the power button for ~2 seconds then select Lockdown.
In case anyone with an Android is confused because they don't see the option: I believe that you have to explicitly enable the Lockdown option in Android's system settings before it shows up.
There are a couple of apps that will also lock down instantly with a tap. I'm sorry, I forget the names; I have been using an iPhone too long now to remember them. But it's handy if you have the phone in hand and "open": you can put a shortcut on every "page" of your Android and tap it, and it enforces locking the phone by passcode. So on most phones it would be a swipe and a tap, probably less than 200 milliseconds if you practiced it.
On recent iPhones, the way to disable biometrics is to hold the side button and either volume button until a prompt appears, then tap cancel. Mashing the side button 5 times does not work.
Not sure how recent you're talking but I have an iPhone 11 Pro and I just tested pressing the side button 5 times and it takes me to the power off screen and prompts me for my password the same way that side button + volume does.
Apple's docs also say that pressing the side button 5 times still works.
> If you use the Emergency SOS shortcut, you need to enter your passcode to re-enable Touch ID, even if you don't complete a call to emergency services.
https://support.apple.com/en-us/HT208076
Pressing it five times starts the emergency SOS countdown (and requires the passcode next time) on my iPhone XS. Maybe you have the auto-calling disabled?
It doesn't on my 2nd Gen iPhone SE (2020). That said, anything that causes the "swipe to power off" screen to appear has the same effect, so essentially holding down the button for 5 seconds does the trick.
The side button 5 times thing is disabled by default, but can be enabled from Settings > Emergency SOS.
I just verified this on iOS 15.1 on an iPhone 12.
Works fine on my 11, my wifes 12, her backup SE gen 2 and my backup SE gen1.
Just tested all of them
I'm on an iPhone 13 and the latest iOS and it does work here. But so does your method...
But I guess yours is the "official" way to do it indeed:
https://www.imore.com/how-quickly-disable-face-id
If you _are_ at the level where TLAs are interested in you they will not give you a chance to mash that button. You will have a loaded gun pointed at your head out of nowhere and you will freeze. From experience.
Is that a story you mind sharing?
He got popped for pedophilia if I remember correctly.
Not sure why this is downvoted; you are right.
I use this app on my phone
https://f-droid.org/en/packages/com.wesaphzt.privatelock/
It locks the phone when a movement threshold is broken, and then requires the password instead of biometrics to unlock the phone.
So the snatch the phone when it is unlocked vector gets harder.
In most cases you are going to want to separately passphrase your messaging stuff so it is locked up when you are not using it. That makes everything else a lot easier. For example, there is a Signal fork that supports such operation:
*
https://github.com/mollyim/mollyim-android
So you're saying I should have to type a secure passcode every single time I want to read or send a message on my phone?
No thanks.
I think it would stay unlocked for a time, possibly until you locked it. Possibly such an arrangement would be more practical for something offline like encrypted email.
A compromise would be to just save the messages to a passphrase. You could use a public key so that you would only need the passphrase to read the old messages. I haven't heard about anything that actually does this.
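A minimal sketch of that write-with-public-key, read-with-passphrase idea, using textbook RSA with toy parameters (illustration only, nowhere near secure; a real implementation would use a vetted crypto library and keep the private key encrypted under the passphrase):

```python
# Textbook RSA with tiny primes -- for illustration ONLY, not secure.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent: anyone can archive messages
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (2753); in a real design this
                     # would be stored encrypted under the passphrase

def archive(m: int) -> int:
    """Encrypt with the public key -- no passphrase needed to save."""
    return pow(m, e, n)

def read_archive(c: int) -> int:
    """Decrypt with the private key -- the passphrase would be needed
    to unlock d before this can run."""
    return pow(c, d, n)

c = archive(42)
assert c != 42                # the stored form is unreadable
assert read_archive(c) == 42  # recoverable once the key is unlocked
```

The point is the asymmetry: saving a message only touches the public half, so the client never needs the passphrase until you actually go back to read old messages.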
I just tried this and it does not work on my iPhone. Is it only on a certain iOS? I am a bit behind on updates. Thanks
That's actually the old method for iPhone 7 and before. Now, you can activate emergency SOS by holding the power button and one of the volume buttons. Assuming you don't need to contact any emergency contacts or services, just cancel out of that and your passcode will be required to unlock.
https://support.apple.com/en-us/HT208076
Try: Hold "volume up" and "power" for 2 seconds
You'll feel a vibration, and biometric login will be disabled until you enter your passcode.
That did the trick, thanks. But ultimately I'm behind on updates, so my phone could probably be broken into trivially with the forensic tools available to most law enforcement. I'm going to update soon.
Don't have any family or friends, either. If you refuse to talk and invoke your rights the government will just threaten to hurt those you love until you break and give up your passwords. From experience.
I liked it in Wrath of Man where one guy is acting tough as fuck until they bring his girl into the room.
Also, if you can, if you are encrypting data, use a hidden volume inside the first - that way you can give the government the outer password and they'll be happy thinking they have everything.
Signal recently added 'disappearing messages' which lets you specify how long a chat you initiate remains before being deleted.
Not "recently". Disappearing messages have been there for at least five years.
Almost _all_ my Signal chats are on 1 week or 1 day disappearing settings. It helps to remind everyone to grab useful info out of the chat (for example, stick dinner plan times/dates/locations into a calendar) rather than hoping everybody on the chat remembers to delete messages intended to be ephemeral.
The "$person set disappearing messages to 5 minutes" has become shorthand for "juicy tidbit that's not to be repeated" amongst quite a few of my circles of friends. Even in face to face discussion, someone will occasionally say something like "bigiain has set disappearing messages to five minutes" as a joke/gag way of saying what used to be expressed as "Don't tell anyone, but..."
(I just looked it up,
https://signal.org/blog/disappearing-messages/
from Oct 2016.)
Maybe it was only added recently on the desktop client.
Keep in mind that any time a message is on flash storage there might be a hidden copy kept for flash technical reasons. It is hard to get to (particularly if the disk is encrypted) but might still be accessible in some cases.
I think encrypted messengers should have a "completely off the record" mode that can easily be switched on and off. Such a mode would guarantee that your messages are never stored anywhere that might become permanent. When you switch it off then everything is wiped from memory. That might be a good time to ensure any keys associated with a forward secrecy scheme are wiped as well.
And a screenshot, or another camera, or a rooted phone can easily defeat that.
The analog hole ALWAYS exists. Pretending it doesnt is ridiculous.
> _And a screenshot, or another camera, or a rooted phone can easily defeat that._
Not if the message has already been deleted. Auto-deleting messages are so the recipient doesn't have to delete them manually, not so the recipient can't possibly keep a copy.
Exactly this. Even more: auto-deleting messages are also so that the sender doesn't have to delete them manually. Most people do not understand this. I even had a discussion with an open source chat app implementer who insisted on not implementing disappearing messages because they couldn't really be enforced.
That's a different threat model, no messaging app is trying to protect the sender from the receiver. Disappearing messages are meant to protect two parties communicating with each other against a 3rd party who would eventually gain access to the device and its data.
Wickr has a "screenshot notification to sender" feature (which of course, can be worked around by taking a pic of the screen without Wickr knowing you've done it).
What made you think I was pretending it doesn't?
Also, iOS has a panic button. Hit the side button (on the right) five times really fast and Face ID/Touch ID is disabled and the passcode is required
Your statement on the 5th amendment is no longer accurate broadly, but the matter still has some cross-jurisdictional disagreement:
https://americanlegalnews.com/biometrics-covered-by-fifth-am...
District courts don't make law. Magistrates working for those district courts even less so. The case this news article cites has no precedential value anywhere - not even within N.D.Cal. - and should not be relied upon.
IAAL but IANYL
Agreed. That decision is unlikely to be repeated by any appellate court. IMO, all the rulings on biometrics not being testimonial are constitutionally correct, even if that sucks. A lot of constitutional rulings suck.
The real solution is for a federal statute to require warrants.
> do not use face or fingerprint to secure your phone
but can't they force you to put your password in that case, instead of your finger?
In general, no.
The contents of your mind are protected because you must take an active part in disclosing them. Of course, they can still order you to give them the password and stick you in jail on contempt of court charges if you don't.
Check out Habeas Data. It's a fascinating/horrifying book detailing much of this.
To err on the side of caution, it's best to make all your passcodes themselves an admission of a crime.
"Your honor, the state agrees to not prosecute on any information inferrable from the text of the password."
"Understood. The defendant's Fifth Amendment right to protection from self-incrimination is secured. As per the prior ruling, the defendant will remain in custody for contempt of court until such time as they divulge the necessary password to comply with the warrant."
I don't know why you're being downvoted. For a start, if it was a third party that had the passcode and refused to divulge it they can be held in jail until they release it, e.g. if your wife knows it. (There are many cases where people have been sentenced to years or decades in prison for not testifying)
If it is you not divulging your own passcode, then legally the judge can't give you contempt, but in reality they could give you contempt until you fought it through the appellate court. Contempt is a special type of thing - certainly here in Illinois you have no right to a jury trial on contempt charges. You're just fucked.
I believe judges can, in fact, hold a defendant for refusing to give up their own passwords, and that the contempt could be indefinite. This is a point of law that is not settled at the federal level yet, and at the state level it varies from jurisdiction to jurisdiction.
In one case, the appellate court at the federal level simply refused to hear a case that had been decided at the state supreme court level.
https://www.reuters.com/business/legal/us-supreme-court-nixe...
They don't actually need your passphrase to unlock your phone - they just need somebody with the passphrase to unlock it for them. And if there's any doubt about who that is, then having that passphrase counts as testimonial; but if there's not - it might not count as testimonial.
Although there are apparently a whole bunch of legal details that matter here; courts have in some cases held that defendants can be forced to decrypt a device when the mere act of being able to decrypt it is itself a foregone conclusion.
(If you want to google a few of these cases, the all writs act is a decent keyword to include in the search).
The defendant never needs to divulge the passphrase - they simply need to provide a decrypted laptop.
We really should up our game on encryption, perhaps some kind of time-based crypto rotation that inherently self-destructs rendering the data unusable if you don't authenticate with it every so often. If you are physically unable to unlock a device you can't be compelled to do so.
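One way to sketch that "use it or lose it" idea (a hypothetical design, not an existing product; and note that a software-only version can be defeated by imaging the storage first, so a real device would need the erase to happen inside tamper-resistant hardware):

```python
import hashlib
import os
import time

class ExpiringVault:
    """The data key is kept only wrapped under an ephemeral key.
    Miss the re-authentication window and the ephemeral key is
    erased, leaving the wrapped blob permanently unreadable."""

    def __init__(self, data_key: bytes, window_s: float):
        self._window = window_s
        self._ephemeral = os.urandom(32)
        mask = hashlib.sha256(self._ephemeral).digest()
        self._wrapped = bytes(a ^ b for a, b in zip(data_key, mask))
        self._deadline = time.monotonic() + window_s

    def _check(self):
        if self._ephemeral is None or time.monotonic() > self._deadline:
            self._ephemeral = None  # "self-destruct": key material gone
            raise RuntimeError("key destroyed")

    def authenticate(self):
        """Each successful authentication extends the deadline."""
        self._check()
        self._deadline = time.monotonic() + self._window

    def unwrap(self) -> bytes:
        self._check()
        mask = hashlib.sha256(self._ephemeral).digest()
        return bytes(a ^ b for a, b in zip(self._wrapped, mask))

key = os.urandom(32)
vault = ExpiringVault(key, window_s=0.05)
assert vault.unwrap() == key   # within the window: recoverable
time.sleep(0.1)                # miss the deadline...
try:
    vault.unwrap()
    assert False
except RuntimeError:
    pass                       # ...and the key is gone for good
```

With the key genuinely destroyed, there is nothing left for the owner to be compelled to produce.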
My passwords are so obscene it's a crime to write them down.
great, so they'll just be able to hit you with lewd charges on top of everything else they are filing.
I think a fingerprint is easier to get if you're not willing to cooperate. However, I think if they really, I mean really want your password, they will probably find a way to get it out of you. I think it also depends if it's the local sheriff asking for your password or someone from the FBI while you're tied up in a bunker somewhere in Nevada.
Apple should allow for 2 PWs, one the real PW, the other triggers a "self-destruct" mode.
Knowing that is possible law enforcement would then hesitate to ask.
_using_ such a self-destruct mode would be a certain way getting yourself charged with destroying evidence/contempt of court/... though.
This would be difficult to prove. They would have to know for certain the evidence was on there to begin with. I don't see the prosecutor easily meeting their burden of proof on this charge.
This is how the statute is worded here in Illinois:
"A person obstructs justice when, with intent to prevent the apprehension or obstruct the prosecution or defense of any person, he or she knowingly commits any of the following acts: (1) Destroys, alters, conceals or disguises physical evidence."
Ugh. It's a vague law. I don't even know how they would prosecute that for virtual evidence held on a device that they didn't already have a view inside of.
i was under such duress that i was shaking so badly that i made typos in my 30 character password 10 times. the loss of evidence is not my fault as it is the people putting me under that duress. don't think it'll hold up though
No 5th Amendment protection? If you spoke the command / "password", would it matter?
FaceID can already prevent a device from unlocking if someone is sleeping. In theory devices could detect if they were being unlocked "under duress" by using biometrics to look at facial expressions, heartbeat, etc, and then wipe themselves. I don't know how practical in reality but perhaps it could be a feature you turn on in a sensitive environment.
How? They can physically overpower you and place the sensor against your finger, or in front of your eye and pry it open without your consent and gain access with 0 input from you. How do they similarly force you to type something that requires deliberate, repeated concrete actions on your part?
In my case they threatened to harm my wife if I didn't stop refusing. After my case is over I'll happily release the video tapes so you can see how this shit works.
Please do. Very few people realize just how bad things can get with law enforcement.
https://arstechnica.com/tech-policy/2020/02/man-who-refused-...
no. The 5th amendment has been read weirdly by the supreme court.
The fifth amendment doesn't protect either speech or biometrics. Nor does it protect passwords.
You are wrong. It protects passwords as speech, as they are testimonial, per many court rulings. It does not protect biometrics based on law that basically says the police can force you to give up your fingerprints for their records, so they can sure as fuck force your finger onto a reader.
> It protects passwords as speech, as they are testimonial, per many court rulings.
Not true.
https://www.reuters.com/business/legal/us-supreme-court-nixe...
for example.
Can they force someone to LOOK at the phone? FaceID with attention check will need you to look before it opens.
Arguably, yes. That's why it's important to know the shortcut on iOS to render faceid inoperable until you give it the password - mash the power button five times fast!
Telegram is encrypted OVER THE WIRE and AT REST by default with strong encryption no matter what you do. It's E2EE if you select private chat with someone.
Lots of FUD out there about Telegram not being encrypted that's just not true. There's nothing either side can do to send a message in clear text / unencrypted.
"Encrypted OVER THE WIRE and AT REST" means that telegram has easy and unfettered access to chat logs. So they can give it up to authorities. (I don't argue that they DO, just that they very much CAN).
This is proven by an extremely simple experiment: you log in on your new phone, enter password and instantly see all chats.
Another simple experiment suggesting the chats are unlikely to even be encrypted at rest: Telegram has an extremely fast server-side message search. You log into a web client, and half a second later you can type a search query and uncover chats from years ago.
It kinda depends on if images and videos are encrypted separately and only indexed at first.
How much data is there in your chats? 1 megabyte is around one thick book in plaintext.
AES-CBC, as an example method, decrypts at more than 2 gigabits per second with hardware opcodes (on a 2012 processor), for example if we look at this data:
https://www.bearssl.org/speed.html
At that speed, it is impossible to tell from the search delay alone whether the server is searching plaintext or decrypting first.
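The back-of-envelope math for that point (the 10 MB chat size is an assumption; the 2 Gbit/s rate is from the linked benchmark):

```python
# How long would decrypting an entire chat history take at
# hardware AES speeds?
chat_bytes = 10 * 1_000_000   # assume a large 10 MB history (~10 thick books)
aes_rate = 2e9 / 8            # 2 Gbit/s expressed in bytes per second
decrypt_time = chat_bytes / aes_rate
print(decrypt_time)           # 0.04 s -- invisible next to network latency
```

So fast search by itself proves nothing either way about encryption at rest.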
Encryption over the wire and at rest is a baseline expectation of any web service today. They would meet that criterion just by using SSL and disk encryption on their servers. E2EE is a much stronger criterion.
> It's E2EE if you select private chat with someone.
And it's not E2EE if you fail to select private chat.
What this means is that any conversations where you do select E2EE are the ones the "authorities" will take interest in, even if only to the extent of metadata.
That's the fundamental problem with E2EE-by-exception, rather than by default. It calls attention to specific data, even if it's not cleartext, rather than obscuring everything.
(how) does the telegram server prevent unencrypted content?
also curious - how does telegram support encryption for chatrooms without the parties being known in advance? or are those chats not encrypted?
Telegram only uses end to end encryption for secret chats. All other chats are only encrypted on the wire with Telegram's keys. Your comment was encrypted on the wire to HN but that's not going to do anything to keep it away from the FBI. The majority of all Telegram messages are only secured by Telegram's unwillingness to cave to outside pressure. It's in plaintext as far as they're concerned.
For somebody who isn't super cryptography-savvy, what's the difference between over the wire and e2ee? Does the former mean that telegram itself can read non-private-chat messages if it so chooses?
> For somebody who isn't super cryptography-savvy, what's the difference between over the wire and e2ee?
E2EE: As long as it is correctly set up and no significant breakthrough happens in math, nobody except the sender and the receiver can read the messages.
> Does the former mean that telegram itself can read non-private-chat messages if it so chooses?
Correct. They say they store messages encrypted and store keys and messages in different jurisdictions, effectively preventing themselves from abusing it or being coerced into giving it away, but this cannot be proven.
If your life depends on it, use Signal; otherwise use the one you prefer and can get your friends to use (preferably not WhatsApp though, as it leaks all your connections to Facebook and uploads your data _unencrypted_ to Google for indexing(!) if you enable backups).
Edited to remove ridiculously wrong statement, thanks kind SquishyPanda23 who pointed it out.
> nobody except the sender, the receiver and the service provider can read the messages
E2EE means the service provider cannot read the messages.
Only the sender and receiver can.
Thanks! I edited a whole lot and that came out ridiculously wrong! :-)
Haha, no problem. I do that a lot too :)
Forgot to upvote you yesterday, done now ;-)
Yeah, if you connect to
and use messenger, it's encrypted over the wire because you're using HTTPS (TLS). But it's not E2EE.
Pretty much. End to end uses the encryption keys of both _users_ to send. Over the wire has both sides use the platforms keys so the platform decrypts, stores in plain text, and sends it encrypted again to the other side. Over the wire is basically just HTTPS.
Over the wire is when it's encrypted during transmission between the user and Telegram's servers: HTTPS or SSL/TLS, etc. At rest is when it's encrypted in their DBs or on hard drives, etc. Theoretically, Telegram can still read the contents if they wish to, if they set up the appropriate code or tools in between these steps.
E2EE means that the users exchange encryption keys, and they encrypt the data at the client, so that only the other client can decrypt it. Meaning Telegram can never inspect the data if they wanted to.
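The difference can be sketched with a toy cipher (a SHA-256 XOR keystream, illustration only, absolutely not secure; key names are made up for the example):

```python
import hashlib

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy XOR stream cipher for illustration only -- NOT secure.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(msg):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# --- Transport encryption ("over the wire") ---
client_server_key = b"tls-session-key"   # shared with the SERVER
wire = toy_encrypt(client_server_key, b"hello")
# The server holds the session key, so it sees the plaintext:
assert toy_decrypt(client_server_key, wire) == b"hello"

# --- E2EE: encrypt for the other USER, then for transport ---
alice_bob_key = b"key-only-endpoints-know"  # never given to the server
inner = toy_encrypt(alice_bob_key, b"hello")
wire = toy_encrypt(client_server_key, inner)
# The server can strip the transport layer but recovers only
# ciphertext it has no key for:
relayed = toy_decrypt(client_server_key, wire)
assert relayed == inner
```

Same wire protection in both cases; the only structural difference is whose key wraps the innermost layer, and that is exactly what determines whether the provider can hand plaintext to law enforcement.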
I very much doubt that Telegram really does encrypt messages "at rest": their server side full text search works extremely fast.
That's a fair assessment, I didn't make the original claim, just answered the definitions of the encryption states.
I haven't dug enough to know what telegram does or claims to do.
yes. worth remembering also that even with e2ee, an ad-tech-driven company could have endpoints determine marketing segments based on the content of conversations and report those to the company to better target ad spend.
Also, as is the case with WhatsApp, they siphon off your metadata and even have the gall to make an agreement with Google to upload message content _unencrypted_ to Google when one enables backups.
are you trolling? telegram (and therefore the fbi) has full access to all content of every message, unless you use private chat, which nobody does, and which isn't even available on desktop. i use it, but it's about as private as discord. which is to say not at all
> the FBI's ability to _legally_ access secure content
Maybe there are laws preventing legal access to message content? Maybe related to wherever Telegram is incorporated.
> _Maybe there are laws preventing legal access to message content?_
Well sure. A lot of laws require a court order. In the U.S. that's usually not too difficult.
It helps Telegram is HQ'd in the UK and the operational center is in Dubai.
Does it? The UK and Dubai are US partners in intelligence gathering and have worked together several times.
Biggest example as of late:
https://www.bbc.com/news/world-middle-east-58558690
I don't know whether Telegram is E2EE by default (probably not.) When you do a call on telegram you are given a series of emoji and they are supposed to match what the person on the other side has, and that's supposed to indicate E2EE for that call.
Verification in band seems pretty meaningless, approaching security theatre.
For voice? It's hard to fake the voice of someone you know.
you don't have to fake the voice, just mitm and record cleartext
But they have to fake the voice, if I call the other person and say "my emoji sequence is this, this and that" for the other person to verify and vice-versa.
Person A calls you. I intercept the call, so person A is calling _me_, and then I call you (spoofing so I look like Person A). When you pick up, I pick up, then I transmit what you're saying to Person A (and vice versa).
How do you know I'm intercepting the transmission? Does the emoji sequence verify the _call_, perhaps?
The emoji sequence is a hash of the secret key values generated as part of a modified/extended version of the Diffie-Hellman key exchange. The emoji sequence is generated and displayed independently on both devices _before_ the final necessary key exchange message is transmitted over the wire, so a man-in-the-middle has no way of modifying messages in flight to ensure that both parties end up generating the same emoji sequence.
I'm not a cryptographer, but that's what I glean from their explanation:
https://core.telegram.org/api/end-to-end/video-calls#key-ver...
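The commit-then-reveal flow described above can be sketched like this (toy group and toy emoji table; the real protocol in the linked docs differs in detail, but the structure is the same):

```python
import hashlib
import secrets

# Toy Diffie-Hellman group -- illustration only, far too small
# to be secure in practice.
P = 2**127 - 1
G = 3

EMOJI = ["🐱", "🚀", "🍕", "🎲", "🌵", "🔑", "🎩", "🐙"]

def h(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

def fingerprint(shared: int, n: int = 4):
    """Derive a short emoji sequence from the shared secret."""
    return [EMOJI[b % len(EMOJI)] for b in h(shared)[:n]]

# 1. Alice picks her share and first sends only a COMMITMENT to it.
a = secrets.randbelow(P - 2) + 2
A = pow(G, a, P)
commitment = h(A)          # transmitted before seeing Bob's share

# 2. Bob picks and reveals his share.
b = secrets.randbelow(P - 2) + 2
B = pow(G, b, P)

# 3. Alice reveals A; Bob checks it against the commitment. Because
#    Alice was pinned down before B existed, a man-in-the-middle
#    cannot grind key values to force both sides' emoji to match.
assert h(A) == commitment

# 4. Both sides derive the same secret and read out the same emoji.
assert fingerprint(pow(B, a, P)) == fingerprint(pow(A, b, P))
```

An interceptor running two separate exchanges ends up with two different shared secrets, hence two different emoji sequences, which is why the voices reading them out is what actually binds the verification.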
Both connections would show different emojis on both sides then. So you would need to somehow deep fake the voice of the one telling their emojis to the other one.
The emoji sequence represents the secret key exchange between you and the other party. If you intercept the call, you are making one key exchange with person A, and another key exchange with person B. Due to the mathematics involved, there is no way for you to force both key exchanges to yield the same result.
For a "standard" DH key exchange it would be possible to brute force the emoji sequence to be the same (since it's too short to be resistant to brute forcing), but the protocol that Telegram uses specifically defends against that by having both sides commit to their share of the key ahead of time, so they cannot try different numbers.
https://core.telegram.org/api/end-to-end/video-calls#key-ver...
So person A and person B are going to see different emojis no matter what you do. To fake a phone verification while performing a main-in-the-middle attack you'd also have to fake their voices to each other. That's hard.
If i'm talking to a person I know in person, I'd recognise their voice.
Real privacy is too burdensome for most users, so they feel just fine if the service owner promises in a stern voice that their chats are _really secure_.
It is not necessary to provide real security, do fingerprint verification, etc if the users are already happy with the level of security they are promised.
The emoji comparison thing is mathematically solid. Assuming the clients aren't backdoored (and the Telegram client is open source, so that's not that easy), there is no way for an attacker to make both sides show the same emoji. If they want to convince two users that they have en E2EE connection while performing a man in the middle attack, they'd have to fake their voices to each other to change what emoji sequence they each read out. That's hard, and therefore this is real, meaningful privacy.
Telegram can potentially perform a mitm at any time and generate matching emoji images for both sides of the conversation, since you can't really trust the app code to be the same they put on GitHub. If you've built it yourself, that'd reduce the risk, but nobody does that because blind trust is much easier.
It is not, by default, and none of the group chats are.
This chart is showing what messaging providers are _willing_ to give to law enforcement, _not_ a reflection of the technical capabilities of the messaging provider.
I assume what they're showing for Telegram (basically no data except IP/phone data if Telegram decides it's for a legit counter-terrorism activity) is a matter of Telegram business policy.
Signal gives the limited information they do because I assume they are subject to warrants from U.S. courts. Telegram is run, to my understanding, from jurisdictions where enforcing a U.S. court order would be difficult-to-impossible, and they keep the private keys to decrypt their stored message content split between servers in relatively non-overlapping legal jurisdictions, so even a successful seizure of data in one wouldn't be enough to decrypt message content.
That's all well and good -- and I appreciate Telegram for setting things up that way -- but that means at any time Telegram _could_ make a policy decision to cooperate with law enforcement and provide much more than what is shown on this chart. Signal, on the other hand, could choose to cooperate as much as they want but not have the technical capability to provide more information. (Barring them updating their client to intentionally build in a backdoor, etc., but I'm basing this on what the current implementation is.)
The other important thing about this chart: this is the unclassified version. Is there another classified document out there which says "we have a secret relationship with Telegram/whomever and they give us all the message content we want" but they don't advertise to the law enforcement community at large? They secretly use it to aid in parallel construction so they don't ever have to reveal that a messaging vendor is giving them message content in court? We have no idea.
tl;dr: Telegram looks great on this chart because of _policy_, not _technology_. I love Telegram, but I'm under no illusions that it's appropriate for talking about things I wouldn't want law enforcement to have access to. Luckily, I haven't found myself needing to talk to my friends about illegal activity.
>Telegram looks great on this chart because of policy, not technology.
This is what puzzles me about Apple: they absolutely have the capability to mitm iMessage pretty discreetly. Because Apple completely hand-waves away key distribution and can silently add and remove keys at their leisure, it's largely only policy that underpins their security. They're not Telegram; they aren't structured to be in a position to ignore demands from the justice system to assist with some agent's latest fishing expedition. How are they getting away with not providing stuff that they obviously have access to? The PDF lists "Pen Register: no capability"
TLDR: Telegram depends on trusting Telegram. Signal is trustless.
Telegram isn't E2EE by default.
My bet is on the fact that they are based in Russia, so they don't give a shit about a US warrant or subpoena.
Telegram isn't based in Russia (anymore). The company is incorporated in Dubai since 2017 [0]. They opposed Russian warrants in the past, resulting in the blocking of the app in the territory for some time [1].
[0]
https://www.bloomberg.com/news/articles/2017-12-12/cryptic-r...
[1]
https://en.wikipedia.org/wiki/Blocking_Telegram_in_Russia
That is correct. By default all messages sent over Telegram are stored permanently in their servers unencrypted.
Not exactly. Non-secret chats are stored encrypted on Telegram's servers, and separately from keys. The goal seems to be to require multiple jurisdictions to issue a court order before data can be decrypted.
https://telegram.org/privacy#3-3-1-cloud-chats
https://telegram.org/faq#q-do-you-process-data-requests
"Not exactly" means "completely incorrect" now?
Telegram doesn't store your messages forever, they are encrypted, and seizing the servers won't allow you to decrypt them unless you also seize the correct servers in another country.
Of course they store your messages forever... They've kept all of my messages for over 7 years now.
If you really think that kind of shit will float...
Source for Telegram storing the information unencrypted at rest?
It is widely known and confirmed by Telegram themselves that your messages are encrypted at rest by keys they possess.
This is a similar process to what Dropbox, iCloud, Google Drive, and Facebook Messenger do. Your files with cloud services aren't stored unencrypted on a hard drive - they're encrypted, with the keys kept somewhere else by the cloud provider. This way somebody can't walk out with a rack and access user data.
How do they provide near-instant full text search on server side if the chats are "encrypted at rest"?
Encrypted at rest means the data is encrypted as stored on disk, not that they do not have access to the keys. That would be end-to-end encryption.
What Telegram claims to have done is set this up in a way that makes it very hard for a single party/state to get these keys. It's not possible to make this completely impossible (if you have a server processing user data, it will have the keys loaded at some point, and there is always _some_ way to physically attack it), but it is possible to make it very hard (physical tamper detection on the servers, secure boot tied to machine identity credentials required to access key material, etc - it's hard, but not impossible, to make this too difficult for any nation state to bypass). We don't know how good their set-up is, but it's certainly possible to do a good job at doing what they claim to be doing.
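We don't know Telegram's actual setup, but the simplest way to force multiple jurisdictions to cooperate is n-of-n secret sharing, where every share is required to rebuild the key. A minimal XOR-based sketch, with hypothetical function names:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n XOR sharing: all n shares are needed; any n-1 reveal nothing."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = xor_bytes(key, s)
    return key

# A data-encryption key split across servers in three jurisdictions:
dek = os.urandom(32)
shares = split_key(dek, 3)
assert recombine(shares) == dek
# Seizing any two servers yields only uniformly random-looking bytes.
```

Any subset of fewer than n shares is statistically independent of the key, so a court order in one country against one server operator recovers nothing by itself.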
It doesn't matter _at all_, if you consider the risks of the FBI (or FSB) accessing your chat logs. Telegram can produce your unencrypted chats to them, whether they are encrypted at rest or not.
I just don't see why they would make life harder for themselves developing stuff, given how often Durov lies. He claimed that all Telegram developers are outside of Russia, but then it turned out that they were working next floor from his old VK company office, right in Saint Petersburg.
Check the difference between Telegram and WhatsApp.
Add to this the fact that WhatsApp
- uploads messages unencrypted to Google if you or someone you chat with enable backups
- sends all your metadata to Facebook.
Then remember how many people here have tried to tell us that Telegram is unusable and WhatsApp is the bee's knees.
Then think twice before taking security advice from such people again.
PS: as usual, if your life depends on it I recommend using Signal, and also being generally careful. For postcard messaging use whatever makes you happy (except WhatsApp ;-)
This isn't the case anymore with WhatsApp. Backups to iCloud and Google Drive are optionally fully encrypted. You have the choice of storing the decryption artifacts on Facebook's servers (which are held in a Secure Enclave) or backing up the 64-character decryption code yourself.
Telegram defaults to no encryption, does not do encrypted group chats, has a home-rolled encryption protocol which almost guarantees it's weak as nearly every home-rolled encryption system always is (if not also backdoored). Coupled with it being headquartered in Russia means it is completely untrustable.
The only reason Telegram comes out on top of Whatsapp in the document in question is because Telegram is a foreign company with little interest in cooperating with a US domestic police agency; the FBI has no leverage over Russian companies.
What that list doesn't show is what Telegram does when the FSB knocks. By all means, give your potentially embarassing message content to a hostile nation's intelligence service.
> Telegram defaults to no encryption,
This is plain false as can be verified by anyone who can check Telegram GitHub repos or run the app in a debugging environment.
Telegram defaults to point-to-point encryption. Same as banks and Gmail.
Fun fact: back in the day, WhatsApp sent messages unencrypted (i.e. as plain text) over port 443(!).
> does not do encrypted group chats,
again, point-to-point encryption
> has a home-rolled encryption protocol which almost guarantees it's weak as nearly every home-rolled encryption system always is (if not also backdoored).
Earlier versions had serious problems. Newer versions are supposedly better.
Also there is a lot of difference between home-grown cryptography by a math wizard, made open source for everyone to inspect, and various secret-sauce variants.
HN has a long history of claiming it can be trivially broken, yet despite the source code being available no one has done it? Laziness or incompetence? Or maybe it isn't so simple?
I don't know but if you want to shut me up and make your claim to fame: do break Telegram cryptography. You'll do the world a service both by exposing it and by shutting up people like me.
Meanwhile, stop spreading lies. Telegram is not unencrypted. It is point-to-point encrypted by default.
If the encryption is weak, prove it or shut up.
I obviously was referring to e2ee; everything is point to point encrypted these days. e2ee is turned off by default and cannot be enabled for group chats.
I stand by my assertion that Telegram's proprietary secret encryption is nearly guaranteed to be weaker than industry-standard encryption. "Home grown is always weaker" is a well known position of almost the entire crypto community.
I further stand by my assertion that Telegram's encryption is nearly guaranteed to be backdoored, because there is literally zero reason for a startup to invest the massive engineering resources needed to successfully develop and maintain its own encryption algorithms, unless they were being paid to do so.
The NSA has a long history of backdooring private encryption technology through industry "partnerships."
Do you seriously think Putin would allow a domestic company to develop a communication tool that would allow Russians to communicate with each other in complete privacy?
> prove it or shut up.
Go read the HN commenting policy (specifically around civility) or shut up.
> I obviously was referring to e2ee;
So you admit you weren't just spreading inaccuracies you heard from someone else; you knew you were posting disinformation.
> I further stand by my assertion that Telegram's encryption is nearly guaranteed to be backdoored, because there is literally zero reason for a startup to invest the massive engineering resources needed to successfully develop and maintain its own encryption algorithms, unless they were being paid to do so.
This is a good argument.
> Do you seriously think Putin would allow a domestic company to develop a communication tool that would allow Russians to communicate with each other in complete privacy?
Telegram is not a Russian company?
>> prove it or shut up.
> Go read the HN commenting policy (specifically around civility) or shut up.
Sorry. I was too harsh. I actually regret it.
Compared to willfully spreading disinformation, however, it seems pretty minor?
-----
A bit more: I know local police used to use Telegram. That worries me.
It is actually even more complicated:
If Putin reads my most personal messages I don't care.
If NSA or even worse, local police actually took their time to read my messages I'd be mad or worried.
However if FSB asked for help they would need a very good reason and I'd try to consult with local law enforcement first.
If local police however asked for help I'd go out of my way to help them.
That is a lot of speculation. If you read the encryption protocol, actual methods being used for encryption are well known. Client is open source and supports reproducible builds. If there is a backdoor, it is in front of our eyes.
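For context, a reproducible build means anyone can rebuild the client from the published source and compare digests with the shipped binary. A toy shell illustration of that check (file names are made up, and deterministic gzip stands in for a deterministic build step):

```shell
# Two independent "builds" of the same source must be bit-identical.
printf 'fn main() {}\n' > source.txt
gzip -n -c source.txt > build1.bin   # -n drops the embedded timestamp/name,
gzip -n -c source.txt > build2.bin   # making the output deterministic
sha256sum build1.bin build2.bin      # digests must match
cmp -s build1.bin build2.bin && echo "REPRODUCIBLE"
```

If the digest of your local build matches the distributed binary, the shipped client corresponds to the public source; a backdoor would have to survive that comparison.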
> What that list doesn't show is what Telegram does when the FSB knocks. By all means, give your potentially embarassing message content to a hostile nation's intelligence service.
Telegram has had a lot of trouble operating in Russia. It was blocked for two years. [1]
If they are so cooperative, why pass up the opportunity to spy on their own people? Or did they only become cooperative after the unblock? It seems they help on some level [2], but does this threaten other countries? Hard to say.
[1]
https://en.wikipedia.org/wiki/Blocking_Telegram_in_Russia
[2]
https://www.independent.co.uk/news/world/europe/telegram-rus...
Telegram's block in Russia was likely a very successful PR action coordinated with authorities.
It was never removed from national app stores, even though Google/Apple usually comply with such requests, and the fact that it was unbanned is unprecedented.
Apple did stop updates for Telegram. Google and Apple have a weak history of complying with Russian requests. Maybe they comply with other countries more, but not Russia.
https://www.pcmag.com/news/after-almost-2-months-apple-stops...
They did have the LinkedIn app removed, and that was a rather mild transgression against Russian laws, compared to Telegram.
LINE, Telegram, Threema and WeChat are not even American companies. Can't they just tell the FBI to suck a fat one when they ask for user data?
Not if they want to operate in the United States or have access to our banking system.
You don't get to pick your jurisdiction and then operate globally. You're obligated to follow the laws where you want to operate.
Fwiw if you want to do any sort of FX whatsoever or accept credit cards, you need access to the American banking system.
I wonder how this affects nonprofits like Matrix/Element and Signal. What can they do with them? Gangstalk their developers? Coerce big tech to ban them from their appstores?
Doesn't Signal's dev already get bothered every single time they travel?
The design of these decentralized/federated platforms is specifically so they _can't_ easily coerce their owners into disclosing incriminating information. In some sense, it's similar to how BitTorrent implicates its users.
Refusing a valid court-issued search warrant/order is a criminal offense. I think it's 180 days for each refusal of a legal order.
The issue is a bit more complex. I was thinking more on the lines of "will I get bothered for making crypto available for the masses that nobody can crack?"
IIRC, didn't they jail a guy for that?
Well, Signal does not have the data; they comply with such orders with the tiny amount of metadata they have (like a timestamp of when your account was created, and that's about it).
Telegram, as I understand it, can access your messages when stored in their Cloud[1]. They just make a choice to not provide the content of those to anyone.
[1] -
https://telegram.org/privacy#4-1-storing-data
Yes but they can still:
- block the company nationwide, see Huawei (also includes ceasing contracts with app stores, both of which are American)
- block access to your website, see TPB
- harass you when you travel, either to the US or fellow countries.
Depends on whether the countries these companies exist in have agreements with the U.S. for surveillance and stuff.
Do you want this to stop? Raise awareness, add this to your mail sig:
> This electronic communication has been processed by the United States National Security Agency.
If it makes people uncomfortable, GOOD. Pretending that your mail - and their mail - is not being accessed is not the way to resolve this uncomfortable situation. Ending it is the way. And that demands awareness.
I'm wondering how this was obtained, and how old this is?
For WhatsApp:
if target is using an iPhone and iCloud backups enabled, iCloud returns may contain WhatsApp data, to include message content
Probably not true since WhatsApp launched encrypted backups.
I mean the document says the data is accurate "as of November/2020", and the slide was prepared 7-Jan-2021
WhatsApp has its own backups in addition to regular full-phone iCloud backups.
If WhatsApp does not encrypt its content at rest locally, then "iCloud backups" still contain everything unencrypted too.
Reading the document answers this for you: It is a declassified government document originally produced by the FBI and was prepared on Jan 2nd, 2021.
Now I just have to get my friends and family to use Signal.
I've had surprisingly good luck with strong-arming people into switching. The important part is having their trust, if they don't believe you they won't listen. The next part is to make simple, verifiable, and non-technical arguments for switching. Believe it or not, almost everybody is willing to take small steps if they're free.
Instead of rambling on and on about "end to end encryption" or "double-ratchet cryptographic algorithms" or other junk only nerds care about, approach it like this:
* There are no ads, and none of the messages you send can be used for advertising
* It's not owned by Facebook, Google, Microsoft, or any of the other mega-corporations, and you don't need an account on one of their sites to use it
* It will still work great if you travel, change providers, etc
* It's much safer to use on public Wi-Fi than other services or SMS
Honestly, don't even touch on law enforcement access as in the OP. That can strike a nerve for some people. The best appeals are the simple ones.
Also, a big one that works for me (especially iPhone users, which are the hardest to convert): "You can send full quality images and videos to Android users." The fact that Apple shoots themselves in the foot is an advantage for Signal.
That's not Apple's flaw, it's a flaw with SMS. It can only handle file sizes up to a certain limit, and during periods of congestion they lower that limit.
It's also an issue the other way around. MMS is the limitation.
The best advice I have to give to get people to switch is showing that you have cross platform capabilities. Essentially everyone can have the features of iMessage/WA: full resolution images and videos, responding to messages with emojis (WA doesn't have), stickers (unfortunately you have to grab from signalstickers.com instead of in-app), voice and video calling, etc. If Apple didn't have such a closed ecosystem then I think it would be harder to get people to switch. In this respect, Signal is more feature rich than anything else (except Telegram, but Telegram doesn't have the same security and isn't trustless).
I think the common mistake is trying to convince people with the security. Use that as a bonus, not the main feature. You're talking geek to people that don't speak geek (convince geeks with these arguments, not mom and dad). I also suggest strong arming people and using momentum (if 4 people in a group of 5 have Signal, switch the group to Signal. Or respond to WA messages on Signal).
I switched to Signal and got a few people to switch too, then they started their shit coin (MOB). IMO Signal Messenger is just a way for that company to reach their shit coin goals. Uninstalled and never recommending that again.
I remember many people being pissed off when these features were announced some months ago.
As far as I can tell, nothing really happened afterwards. I use Signal on a daily basis and haven't noticed any coin-related functionalities. Either they were canceled, haven't been released yet or they're just buried somewhere deep and not advertised.
Do you have a different experience?
MOB is in beta and I think getting moved (if not already) to main soon. But it is non-intrusive and you won't notice it unless you look for it. People are just complaining about a feature that you have to look for. I'm not a fan of MOB and how the situation was handled, but I also think the reactions people are having are a bit over the top.
It's in beta, you can enable it in settings.
It's still a pain to buy MOB in the US so it's not that usable in the states. It would have been interesting to me if they just used Zcash instead of rolling their own, but I'm not sure what's supposed to be special about MOB vs. Zcash.
I also don't think it's that big of a deal.
I'd love Zcash (forced private transactions). But honestly I'd also like if we could use different currencies. My dream was that you could send cash and they would just use MOB as the intermediate transaction (so your bank would just see a transaction to/from Signal and not who you were sending/receiving to/from). But that also has technical challenges and legal issues so I understand why not. I think a multi-currency wallet is the next best option imo.
Yeah, my long term hope for this stuff is that Urbit succeeds and then a lot of the UX here gets fixed by that and all of these apps become redundant and unnecessary. I'm definitely in the minority there but I think there's a future path where that's possible and works well.
Are you sure you're not thinking of Telegram? They had a thing called Telegram Open Platform or something (TOP rings a bell for some reason)
Signal has some coin
https://support.signal.org/hc/en-us/articles/360057625692-In...
I think it's
I have been trying to get people to install signal for 2 years. No one has budged.
The day facebook went down for some hours I got phone calls.
I have had some success. It helps that many of the people I regularly contact were willing to migrate, even after some time. Most already used WhatsApp, so the friction to installing a new app was less than someone not accustomed to using a dedicated app for messaging.
But most of my American friends that don't have international contacts still just use SMS because they are not really accustomed to an app such as WhatsApp and so on.
It's incredibly disheartening how difficult it is to get most people to care about digital privacy.
Even if you do it is pretty much impossible to get them to check their safety numbers and keep them checked.
I wonder where Matrix/Element would fit into this chart.
Yeah I was thinking maybe some of the most secure platforms aren't on the list.
Briar was another one.
The way these became bullet points on the slide is roughly:
* An active investigation leads an agent to a suspect known to have used one of these applications
* An administrative subpoena is issued to the company asking for what information is available
* The company is then ordered by a federal judge to provide information related to a particular account or accounts
* The company complies.
This is why it is important to understand how your messaging service handles data and how you can compromise your own safekeeping of all or part of that data.
It's kind of funny that WeChat seems pretty locked-down to the FBI, especially for Chinese citizens. Makes sense, really, but still funny.
Like Telegram, they simply don't care about US warrants.
Well, who cares when all they need is to use something like Pegasus to obtain full access to your phone simply by sending you a WhatsApp message (without having you even open the message).
Knowing how well guarded iOS is against app developers, I wonder what kind of zero-day would suddenly turn a message received in WhatsApp into full system access. I think NSO found a WhatsApp backdoor, not a zero-day bug.
NSO can't send you a WhatsApp message if you don't have WhatsApp on your iPhone.
WhatsApp is owned by Facebook, not by Apple. I don't think Apple wants to share a backdoor with Facebook.
I don't know any detail of the whatsapp vulnerability that NSO exploited.
Or compromise the device in some other way.
At the risk of being cliche here's a relevant xkcd -
Can't the FBI get chatlogs from WeChat?
https://www.youtube.com/watch?v=N5V7G9IBomQ
In the short documentary that the FBI made about catching Kevin Mallory they mentioned catching him sending classified stuff via WeChat.
I use LINE a fair bit; I have a number of Japanese friends as well as friends who have traveled to Japan. I had no idea they had implemented much better encryption [1]. I'm convincing all my contacts to turn on the option now.
[1]
https://engineering.linecorp.com/en/blog/new-generation-of-s...
It's about time they implemented 1970s encryption technology. It should be on by default.
I'd like to see Tox and Jami.
I read somewhere that Tox's security was compromised.
I'd like to see that. Was it not fixed?
I found
https://media.ccc.de/v/rc3-709912-adopting_the_noise_key_exc...
which might be it.
The funny thing is that sometimes when I search for Arabic words about Islam I get results for some old and usually extremist books in the CIA library (direct links to PDFs), which makes me wonder why.
Isn't this simply imaginary, when in practice all the FBI has to do to up the ante is request military-grade interception from a willing foreign counterpart?
The point of promoting and using privacy respecting software is not necessarily to make it _impossible_ for law enforcement to get what they want. It's to make it somewhat expensive and require targeted probes.
You simply want it to be cost prohibitive to engage in mass surveillance on everyone, because that is an immensely powerful tool of totalitarian oppression that get really bad if we happen to elect the wrong person once.
I don't care if they spy on _me_; they probably have a good reason to! But I do care if they spy on _everyone_, so I make it hard to spy on me.
I agree with you on the level of my person, and naturally flag that this economic argument is extremely poor policy. It's quite unclear that the marginal cost is non-zero, or even flat per person. One might reasonably conclude we are already each inside a high-resolution springing trap, waiting for the moment we find ourselves athwart the powers that be. Imagine a physical space where the local police could simply call in foreign air strikes upon domestic citizens, with only economics to prevent otherwise. We must have transparent and firm laws, reformed at a fundamental level.
Can't the FBI do a Pegasus style remote access thing on an appropriate warrant themselves?
Seems like it. And can they also do it without an appropriate warrant [by asking someone else]?
Either of which requires... expense.
What about regular text messages?
If I remember correctly, standard SMS has no security on it at all and is in the clear during transit. I may be wrong, and I'm never scared of being corrected.
Assume anything sent over a cellular network carrier via normal SMS can not only be retrieved, but intercepted.
Thank goodness a lot of companies regularly use it for 2FA.
IIUC, they can get 7 years worth of SMS/MMS (including contents) with very little effort/cost.
Telecommunication is highly regulated. They have to keep records for a long time and make them available to law enforcement.
They've been inside the phone companies for a long time so I assume they have full access to SMS.
PSA: This is a year old.
I do DIY encryption with enigma reloaded and it works
The list doesn't have ANOM
Is this only page two of an alphabetical list?
Or are there no messaging systems with a name before 'i'?
WhatsApp. -> FBI
Telegram. -> KGB
Signal. -> The rest of us?
Didn't Russia(KGB) try to block Telegram in the past and were unsuccessful? I feel like they are fairly safe and trustworthy. Of course, I like Signal best, but Telegram has so many nice features.
Signal -> Mossad
What about Snapchat?
Snapchat?
I use enigma reloaded to manually encrypt my messages
link seems to be broken
https://propertyofthepeople.org/document-detail/?doc-id=2111...
Not only is the main link broken, but their silly PDF reader is broken for me.
Here's a direct link to the PDF:
https://assets.documentcloud.org/documents/21114562/jan-2021...
The FBI is monitoring users - it hasn't been a secret for a long time. Enough information has already leaked to the network to make it clear that this is not just a conspiracy theory. But, to be honest, I was surprised that each of the supposedly reliable messengers leaks data to the authorities. I was also surprised that I did not see Utopia P2P in the document. Maybe the only reliable application is one that stands for freedom of speech and anonymity and does not obey the authorities?
fuck the police
Dupe of
https://news.ycombinator.com/item?id=29394945
Now I just have to get all my friends and family to use Signal.
My family has really taken to it. Granted, it's mostly just a family messaging app to them, and they are not very technically fluent, but they seem to have picked it up just fine.
I really think this is not discussed when Hacker News brings up secure messaging. The user experience is so much more important than the underlying tech. My family doesn't care about end-to-end encryption. They care about video calling with the press of a button, and easy features that are just there and work, like Zoom or the many other software products that they have to use.
Thank you Signal team for focusing so hard on the user experience.
If this is what it takes to keep us safe, I and most Americans are OK with it. We live in dangerous times.
The US has a balanced criminal justice system -- as long as due process is preserved, privacy from the state should not be a major issue.
Current U.S. "due process" includes national security letters and other secret legal requests and secret courts to approve those requests. So there are still some checks and balances but it's less clear that they are working well enough or as intended.
Just look at the transparency reports of major Internet companies; they can report numbers of (certain types of) requests and that's about it. Mass surveillance under seal is not a great trend.
When political parties start advocating for jailing political opponents and treating the supreme court as political office for nominations, I find it harder to trust the current due process.
How does them reading my messages keep me safe, though?
In that example it's keeping me safe from you
Ok, let's turn it around and say they would keep me safe from you. Why would they? What's their motivation to keep ME safe from YOU? Are you even a threat? Am I a threat? And would a real threat even be caught by this system?
> _if it's not we have bigger problems anyway_
Really, we shouldn't do anything; we have the bigger problem of the eventual heat death of the universe.
Take into account not only the size of the problem, but how easy it is to do something about it.
This discussion is not very interesting from a security perspective. I tuned out at "cloud".
If it's not in your physical possession, it's not your computer. If it's not your computer, then whoever administers the computer, or whoever [points a gun at/gives enough money to] the administrator of that system, can access whatever you put on that system.
If a "cloud" or "service" is involved, then you can trivially use them to move or store data that you encrypted locally on your computer with your key that was generated and stored locally and never left your system. But subject to the limits above, the administrators of the other computers will still be able to see metadata like where the data came from and is going to. And they might be able to see your data too if you ever (even once, ask Ross Ulbricht) failed to follow the basic encryption guidelines above.
You can make metadata access harder via VPNs and Tor, but you CANNOT make it impossible- in the worst case, maybe your adversary is controlling all the Tor nodes and has compromised the software.
Which leads me to my last point: if you did not write (or at least read) the code that you're using to do all of the above, then you're at the mercy of whoever wrote it.
And, if you try to follow perfect operational security, you will have a stressful and unpleasant life, as it's really, really hard.
> if you did not write (or at least read) the code that you're using to do all of the above, then you're at the mercy of whoever wrote it.
It's worse than that. Even if you read the code, you have to trust that the code you read is the code a service is actually using. Even if you deploy the code yourself, you have to trust that the infrastructure you're running on does not have some type of backdoor. Even if you run your own infrastructure, hardware can still have backdoors. Of course, the likelihood of any of these things actually becoming a problem decreases significantly as you read through the paragraph.
> _the likelihood of any of these things actually becoming a problem decreases significantly as you read through the paragraph._
And yet, "likelihood" doesn't necessarily mean "hasn't been done".
Just look at:
* [0]: Intel ME
* [1]: Solarwinds attack and CI systems
* [2]: Ubiquiti attack and complete infrastructure compromise
* [3]: And the famous Ken Thompson statement
[0a]:
https://news.ycombinator.com/item?id=15298833
[0b]:
https://www.blackhat.com/eu-17/briefings/schedule/#how-to-ha...
[1]:
https://www.cisecurity.org/solarwinds/
[2]:
https://krebsonsecurity.com/2021/04/ubiquiti-all-but-confirm...
[3]:
https://users.ece.cmu.edu/~ganger/712.fall02/papers/p761-tho...
Indeed. I mentioned those specific things because it has been done. However, I think the likelihood of the average user being affected by things near the end of the list is generally quite small. If we aren't willing to accept this, at some point, we can't use technology for anything important.
> Even if you read the code, you have to trust that the code you read is the code a service is actually using.
Don't forget to verify the compiler's own code as well, to ensure it hasn't been compromised in order to inject an exploit into the binary at compile time.
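One partial mitigation for exactly this worry is reproducible builds: build the same source with independent toolchains and check that the artifacts are byte-identical, so a trusting-trust backdoor would have to live in every toolchain at once. A toy sketch of the comparison step (function names are mine, not from any tool):

```python
# Sketch: compare artifacts from independent builds of the same source.
# If two unrelated toolchains produce byte-identical binaries, a compile-time
# backdoor would have to exist in *both* -- the idea behind reproducible
# builds and diverse double compilation.
import hashlib

def sha256_of(path: str) -> str:
    """Hash a build artifact in chunks so large binaries are handled fine."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(artifact_a: str, artifact_b: str) -> bool:
    """True when two build outputs are byte-identical."""
    return sha256_of(artifact_a) == sha256_of(artifact_b)
```

In practice you would run this over, say, a gcc-built and a clang-built output of the same source; real reproducible-builds projects also have to normalize timestamps, paths, and other sources of nondeterminism first.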
If I'm reading this page correctly, AMD is working on something that would allow you to run trusted code that not even someone with physical access to the hardware could read (without breaking this system).
https://www.amd.com/en/processors/epyc-confidential-computin...
And this tech is already implemented by GCP:
https://cloud.google.com/confidential-computing
> With the confidential execution environments provided by Confidential VM and AMD SEV, Google Cloud keeps customers' sensitive code and other data encrypted in memory during processing. Google does not have access to the encryption keys. In addition, Confidential VM can help alleviate concerns about risk related to either dependency on Google infrastructure or Google insiders' access to customer data in the clear.
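From inside a guest, one coarse signal that SEV is active is the kernel boot log. The exact message format below is my assumption based on Linux's AMD memory-encryption boot messages, and this is only illustrative: real assurance requires remote attestation of the launch measurement, not grepping dmesg.

```python
# Sketch: parse dmesg text for active AMD memory-encryption features.
# The log-line format is an assumption based on Linux kernel messages
# like "AMD Memory Encryption Features active: SEV SEV-ES" -- it is NOT
# a substitute for SEV's signed attestation report.
import re

def sev_features(dmesg_text: str) -> set[str]:
    """Extract the active memory-encryption feature names, if any."""
    m = re.search(r"Memory Encryption Features active:\s*(.+)", dmesg_text)
    return set(m.group(1).split()) if m else set()

sample = "[    0.000000] AMD Memory Encryption Features active: SEV SEV-ES\n"
assert sev_features(sample) == {"SEV", "SEV-ES"}
assert sev_features("no encryption mentioned here") == set()
```

As the replies below this comment point out, a malicious host could fake this log line, which is why the attestation flow exists at all.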
Then you only have to trust that AMD did not accidentally or intentionally introduce a bug in the system. Remember Spectre? Remember all the security bugs in the Intel management code?
You also have to trust that AMD generated and have always managed the encryption keys for that system properly and in accordance with their documentation.
And are you even sure that you're actually running on an AMD system? If the system is in the cloud, then it's hard to be sure what is executing your code.
And are you sure that your code didn't accidentally break the security guarantees of the underlying system?
I have worked on all these problems in my day job, working on HSMs. At the end of the day there are still some leaps of faith.
_puts on tinfoil hat_
You'd also need to consider AMD's management engine, the Platform Security Processor. If we're really slinging conspiracy theories, AMD processors are likely just as backdoored as Intel ones. I don't mean to be grim, but I think it's safe to assume that the US government has direct memory access to the vast majority of computer processors you can buy these days.
[/conspiracy]
if you're going to that level, then have a look at Five Eyes (and its derivatives)
https://en.wikipedia.org/wiki/Five_Eyes
/ Echelon
I probably shouldn't have removed my tinfoil lining yet, but yes, you're correct. Any information the US government has access to through these channels is also probably accessible by our surveillance/intelligence allies. It raises a lot of questions about how deep the rabbit hole goes, but I won't elucidate them here since I've been threatened with bans for doing so. I guess it's a do-your-own-research situation, but always carry a healthy degree of skepticism when you read about anything government-adjacent.