My main OS is Fedora Silverblue, which is "an immutable desktop operating system". I install GUI software through Flatpak. For development, I run a VM (Fedora Server) and connect to it through SSH (VSCode works really nicely here). I have different VMs for different use-cases, but mostly I work in just two "fat" VMs. I try to be diligent in what I install and use in the main OS as well as in the VMs.
It's not entirely safe, but I think gets me 90% of the way to a reasonably safe workspace. If there is malware in a VM, I can nuke it and reset affected credentials in my main OS (which is not infected). It's not too much extra overhead, I just SSH into the VM and work as usual. I've used Qubes before and have also tried a fully Docker-based workflow (developing exclusively in containers), but there can be too many headaches with either.
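For reference, the VM hookup is nothing exotic: a plain SSH config entry that VSCode's Remote-SSH extension picks up. A minimal sketch (the host alias and address are made up):
```
# ~/.ssh/config on the main OS; "devbox" and its address are hypothetical
Host devbox
    HostName 192.168.122.50
    User dev
    IdentityFile ~/.ssh/id_ed25519_devbox
```
VSCode can then open a remote workspace on "devbox" without any project code ever touching the main OS.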
I like your setup. It's important to point out to users that there is no such thing as "safe" in the absolute sense, only degrees of safety.
I think that a combination of what you're already doing and living a few releases behind the latest is about as safe as we should hope for on our personal machines.
GUIX and NixOS can similarly create an immutable root system + packages. System and app state is still stored in /var and /home/me, but the system and installed packages are unalterable.
There was a good blog post a few months ago on further hardening NixOS:
https://christine.website/blog/paranoid-nixos-2021-07-18
I always get the impression an immutable OS is solving the wrong problem. It protects the OS - all the data I don't care about - but does nothing to protect /home/me, which is everything I do care about.
How is this solved?
Agreed entirely. It wouldn't even solve the attack described in the post: a malicious VSCode plugin, as that would be run from somewhere in your /home folder.
It feels like so many security ideas fail to protect the /home dir, which contains 99% of the important data that needs securing, and instead focus on the OS. I understand that for servers and hypervisors, but it feels like so little effort goes into this on the desktop OS side.
Basically, desktop OSes need to become more like iOS and Android in how they ask for permissions and segregate apps from each other.
macOS does _a little_ of that, although limited to certain "special" folders (Documents, Desktop, Downloads, ...) and people call it macOS Vista for it.
Note: I like that it does that and would like this capability to be expanded and refined.
It's a start for sure but there is so much low hanging fruit here.
E.g., a warning/prompt if a program tries to read/modify/delete more than X files in your home directory (not including its own config files). This would stop an entire class of malware scanning for passwords or secrets in your home folder. If implemented right, it could even stop some ransomware attacks after the first few files.
It seems crazy that so little effort is put into this kind of thing vs the crazy level of effort that goes into hardening the OS.
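As a rough sketch of the detection half (actually blocking the access would need fanotify permission events, which plain inotify can't do), something like this could at least raise an alert:
```
# watch for file-open events under $HOME and warn past a threshold;
# a real tool would also whitelist each app's own config paths
inotifywait -m -r -e open --format '%w%f' "$HOME" 2>/dev/null |
  awk -v max=200 'NR > max { print "WARNING: " NR " files opened under HOME"; exit 1 }'
```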
Prevention and detection of tampered files in /home/* is definitely a major aspect of securing the workstation, since attacks are not just about exfiltration of data and could very well be about planting a .murder_VIP.doc. There seems to be a need-gap[1] for solutions in this space.
But considering that an immutable OS increases the overall security of the system, and that the system/software likely has to be compromised first before /home/* can be tampered with, I think they're useful to prevent this attack as well.
[1]
https://needgap.com/problems/188-proving-computer-hack-proof...
(Disclosure: I run this problem validation forum).
Correct. Immutability does very little for security: attackers are interested in reading your files, extracting your authentication data e.g. session cookies and little else.
What really helps is application sandboxing, e.g. with firejail.
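For example, giving the browser its own throwaway home directory so it can't see the real one:
```
# firefox runs against ~/sandboxes/browser as its home; the rest of ~ is invisible to it
firejail --private=~/sandboxes/browser firefox
```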
Fantastic workflow!
Is there anything immutable like that on the Debian side?
As of half a year ago, it looks like there wasn't anything:
https://www.reddit.com/r/Ubuntu/comments/lbyo9o/is_there_any...
No, as far as I know. I haven't used Debian-based distros in years, so I'm unsure of the situation there, to be fair.
There are other approaches, e.g. Guix and NixOS as mentioned by others, but you can also get most of Silverblue's benefits from BTRFS/ZFS and a package manager that can create snapshots. Not sure if apt integrates like that.
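On the Debian side, snapper plus btrfs can approximate the snapshot half; a sketch, assuming a configured "root" snapper config:
```
# snapshot the root subvolume before upgrading, so a bad update can be rolled back
sudo snapper -c root create --description "pre-upgrade"
sudo apt upgrade
```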
For me, Silverblue just enforces good defaults: the base OS is as simple as needed and I only install necessary applications through Flatpak, which has some security benefits over direct package installations. VMs are for more involved work with potentially risky software (i.e. development environments with random NPM dependencies).
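Flatpak permissions can also be tightened per app after install; for example (the app ID here is hypothetical):
```
# revoke home-directory and network access from one app, for this user only
flatpak override --user --nofilesystem=home --unshare=network org.example.App
# inspect the resulting permissions
flatpak info --show-permissions org.example.App
```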
Silverblue sounds promising. "Entirely safe" is just shy of impossible, but I feel like a reasonable and responsible degree of security should be possible without too many sacrifices.
Silverblue has its own restrictions and might not be ideal for your specific workflow, but it works for me as a sort of base OS.
There's a lot more that could be done to advance the security aspect of Linux. Flatpak is an improvement, as it requires applications to declare the permissions they need, but it still has a way to go. I wish there were a tool to monitor whether some random process is accessing folder X or has high network usage, à la Little Snitch on Mac. The Linux desktop doesn't have a ton of manpower behind it though, so it all takes time.
Silverblue is the most promising development in the Linux space. I used it for years before switching to a MacBook. Sadly too many Linux users are stuck in the mindset that you should audit the code of every app you run and that the user/root split is all that is needed for security.
Are you running Silverblue on Mac?
I have a Thinkpad.
Let’s hope the vulnerability isn’t injected into all the deps your OS pulls down to run the VM
Physics informs us a perfectly secure system is impossible.
We need to socially accept it rather than make ourselves paranoid.
It's not all or nothing. We don't need to give up because some aspects of a system are insecure. There are different threats and different answers.
Qubes got it right in its philosophy: a system is secured either by correctness or by isolation. The former is not possible on a Linux developer machine.
You can never have a totally trustless system, but I think we could trust our applications a lot less than we trust them now.
There's a middle ground between accepting and paranoia: secure to best effort.
With modern software distribution infrastructure you are entirely fucked at this point. It's the number one business-critical risk from our analysis. And it's not just developer workstations, but also stuff sneaking into production during a build cycle, waved through or missed by whichever of the participating tools is supposed to stop it. The developer workstations aren't even a worthy target.
Imagine if someone managed to inject malware into a core kubernetes component which gets pulled from docker hub or something.
The only reasonable way to prevent this is to do development work in a fully constrained environment, both from a hardware and a software perspective, and that means taking on a hell of a lot of compromises which will cripple your productivity entirely.
My efforts to investigate this pretty much led to the conclusion that you need two computers attached to different physical networks. The first computer has internet access and allows things like email and web access, but has no administrative capabilities, no development tools installed, and no way of installing tools as an unprivileged user. The second computer is the only one you are allowed to do development work on, and it has no internet connection.
Obviously when proposing this, it was laughed out of the room. This is exactly what I intended to prove: you can’t fix this reasonably at this point so don’t bother doing it.
At least not unless you have an airgapped machine and solely write in something standardised, with no external dependencies or libraries and no possibility of pulling something from outside your trust boundary.
I hope you sleep better than I do knowing this as well.
Edit: I was actually most happy writing C on an airgapped network about 25 years ago, on a Sun machine with some manuals and some O'Reilly books on my desk. The very thought of downloading something there was laughed out of the room. I wonder what they do now.
I actually worked as a software dev at a genetics laboratory a few years ago where there was a complete network split implemented matching your description.
All resulting research data was on the airgapped network, where each terminal was heavily restricted in what it was allowed to do and by whom.
We, the software devs, were on the _open_ network, with fewer restrictions. Due to the concern of a supply chain attack there was a heavy feel of _not invented here_.
Over its ~25 years of operation the lab has created immense amounts of internal software from scratch.
It was an interesting look into a small dev team that had mostly never worked anywhere else (yes, the code was often very messy) and had a xenophobic view of off-the-shelf software and solutions.
I have worked in the defence sector on similar things and agree about the not-invented-here thing. At the same time, it breeds tools out of necessity. My only tale to tell is that I pretty much designed and built Ansible in Perl in 1999.
Of course, young and inexperienced, I did not see the potential preferring mild alcoholism and socialising. Doh!
You're not the only one who built a configuration management system before Ansible; CFEngine's first release was in 1993, for example. Ansible's success was its appeal to people who did not think of themselves as traditional programmers.
AFAIK, no K8s core components are containers that come from a registry; see below. A random app/container running in your orchestrator could have sketchy provenance, sure.
https://kubernetes.io/docs/concepts/overview/components/
>Imagine if someone managed to inject malware into a core kubernetes component which gets pulled from docker hub or something.
Digital signatures are a typical way to protect against this. Pretty robust as far as I know.
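For container images, cosign is one tool in this space; a sketch, assuming the publisher signs their releases and you've obtained their public key out of band (the image name is made up):
```
# verify the image signature before letting it anywhere near the cluster
cosign verify --key cosign.pub registry.example.com/k8s-component:v1.2.3
```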
Digital signature tells you that the owner of the key signed it. Not who currently owns the key.
Some scattered thoughts on this:
What's the bar for reasonable security, in your opinion?
That really depends on your situation.
Do you think somebody might target you, specifically? Has your machine been compromised before? Do you do anything with potentially high leverage for an attacker?
If the answer is "no" to all these, I'd say don't sweat it too much beyond standard "best practices".
> * Don't always immediately update to the newest version. If it's compromised, give the other users and the vendor time to find the vulnerability.
It always feels like advice that relies on not everyone following it is cheating just a bit.
Different actors have significantly different security preferences / concerns and risk/reward profiles with respect to new features. Almost everyone could be following it -- everyone, if you trust the vendor actually dogfoods -- and it would still work.
I wait a few months. Someone else only waits a week.
It isn't dependent on everyone else not following it; it's just incomplete: you should review the code, or wait for someone you trust to review the code, or do what GP recommends.
Updating to the latest version and running it shouldn't be how you review them for dependencies. You can follow the development of a few packages you care about and review them immediately, while not blindly running cutting-edge everything at all times.
I don’t think this is the right question. How do we secure our code, our data, and our supply chain is the right question. Most of our work environments span multiple hardware endpoints and virtual instances, and our data is spread across these devices and cloud services. It’s a harder problem, but let’s start by defining what we’re protecting and then work it.
As a starting point I think crowd wisdom is called for given the size of the challenge this would be for an individual. If you see something, immediately say something. Responsible full disclosure on tight timelines. Ways to rapidly get the message in front of those impacted where action by them is needed. Build systems to avoid requiring action from those affected without compromising freedoms.
In general, we will always start from a position of considering a developer machine to be infected. This is part of the Zero trust approach to security. We work with defense in depth. If the developer machine isn't trustworthy, and the developer isn't trustworthy, how do we best protect our systems and client data?
As code moves through to production, we have multiple stage gates and steps.
- From a code perspective, we use dependency and code scanning (yarn audit, SonarCloud, SonarQube, etc.). SonarCloud has nice IDE integrations.
- Code is pushed and picked up by a pipeline, and further scans are done looking for vulnerabilities/CVEs etc. If any significant ones are found, the pipeline fails (yarn audit, SonarCloud, SonarQube, Palo Alto container scanner, Docker Bench, etc.)
- The pipeline deploys to test and does automated checking
- Prior to a production deployment, the pipeline must be manually approved.
- Once in production, we use further scanning and monitoring (Security Hub/Centre, Tenable, SIEM)
Our developers have no direct ability to change the production systems in any way. But, they can write code and commit to our Git repository as much as they want. Everything from that point is automated (except for manual approvals).
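As a concrete example of one of those gates: yarn classic's audit command returns a bitmask of the severity levels it found, so a pipeline step can fail on high/critical only. A sketch:
```
# exit-code bitmask: 1=info, 2=low, 4=moderate, 8=high, 16=critical
yarn audit; rc=$?
[ $((rc & 24)) -eq 0 ] || { echo "high/critical vulnerabilities found"; exit 1; }
```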
Some things you can do to mitigate the impact of a developer machine compromise:
a) Don’t let anyone push to master. Everyone goes through code review.
b) Limit access to production. If you’re a shop with a separate ops function, none at all. If developers do their own basic ops, then limited and structured control surfaces for them. Choose from among the code reviewed builds to deploy, that sort of thing.
c) Where relatively high privilege in production is required, provide dedicated workstations just for that. These don’t need arbitrary local software or even internet access generally, just the production VPN.
At first I was going to say (a) only works if everyone does that, or you refuse to use anything that doesn't (which I suppose is slightly easier than auditing everything you use...)
But after reading (b) & (c) I think you're answering from a business perspective about this risk on employee machines? I read OP's question as being more about the employee's own risk; or, if the answer is "the work machine is for work only", then what about the personal machine?
IMHO the same reasoning applies to the personal machine: you can _reduce_ the risks quite significantly, but you should expect that you won't be able to fully secure it. Once the basics are done (keeping up to date with patches, not running unusually untrustworthy software), the most effective next steps are about mitigating the consequences if the machine is compromised, not about expecting to prevent compromise outright. Getting your machine owned is not a _common_ occurrence, but it's also not entirely preventable; some risk is always there.
Like, for personal use - have offline backups that protect you if your machine gets ransomwared; don't keep the keys to your cryptoinvestments on the same machine as the VSCode plugins OP mentioned; etc.
Yeah I was more worried about an attacker getting my bank account details than an attacker compromising a codebase I work on.
I use Windows as my main OS. In Windows 10, there is a feature called Unified Write Filter that essentially resets your computer after every reboot [1].
Unified Write Filter (UWF) is an optional Windows 10 feature that helps to protect your drives by intercepting and redirecting any writes to the drive (app installations, settings changes, saved data) to a virtual overlay. The virtual overlay is a temporary location that is usually cleared during a reboot or when a guest user logs off.
When I first got my laptop, I installed a fresh clean copy of Windows 10, installed all my commonly used applications, configured all my settings, and then enabled UWF. On every reboot, it goes back to this clean snapshot, no matter what I do - And reboots are quick too (~10 seconds).
I like this setup because I'm never worried about making changes to my computer to try them out (installing a new program, configuring obscure settings, etc). If I don't like it, I can get back to my fresh state with a simple reboot. I also like that the feature is built into the OS - there are similar third-party solutions such as "Reboot Restore RX" [2], but I don't trust these as much, and they're not as clean as UWF.
The only downside is that when you _do_ want to persist changes, you have to disable UWF, reboot, make your changes (such as Windows Update), enable UWF again, and reboot. But I seldom have to do this. I treat the OS as pretty stateless and keep all my personal files in a separate BitLocker-enabled partition that isn't subject to UWF.
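For reference, the toggle is only a few commands from an elevated prompt; this is roughly my update routine:
```
rem writes hit the disk again after the next reboot
uwfmgr filter disable
shutdown /r /t 0
rem ...apply Windows Update, install software, then re-enable and reboot...
uwfmgr filter enable
shutdown /r /t 0
```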
[1]
https://docs.microsoft.com/en-us/windows-hardware/customize/...
[2]
https://horizondatasys.com/reboot-restore-rx-freeware/
How hard is it for a program to disable UWF or write directly to the "immutable" partition? Is it still a single UAC popup away (or untrusted command in an admin terminal)?
I suppose a malicious program with elevated admin privileges can disable UWF (it can't be disabled without admin), but the change wouldn't apply immediately, only at the next boot. And at that point (next boot), the drive would still go back to the "clean" state, so the malicious program would presumably be gone and couldn't do more harm.
Albeit in this scenario, UWF is now disabled, so future writes go directly to the disk until UWF is enabled again.
How did you get an enterprise version of Windows 10? Asking for a me. Did you just go for an E3 subscription, or is there a better way?
I'm actually using Windows 10 Education, which also supports UWF. My university allows alumni to keep their *.edu email, so it was pretty easy in my case.
How do you secure your workstation without living like a monk?
Using Qubes OS. It's really easier than you might think. The UX is amazing. Can't recommend it enough.
Not too familiar with Qubes, but it looks like it offers security by using virtualized domains (e.g. one for your dev environment, another for personal browsing, etc.), with controlled interactions between the domains. However, in the OP's case of a VSCode plugin containing malware, his entire dev domain would be compromised as a result of this plugin. If he uses the same VSCode plugin in his personal domain as well, that will also get compromised.
I wonder if an approach similar to Firejail would work better, by, say, severely restricting what VSCode can do (e.g. what directories it can access, controlled network access, etc.).
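Something like the following, as a rough sketch (the project path is hypothetical, and --net=none would also break extensions that legitimately need the network, so it's a trade-off):
```
# expose only one project directory from $HOME and cut all network access
firejail --whitelist=~/projects/myapp --net=none code
```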
Yes, Qubes provides security with virtual machines. You can compartmentalize your dev environment further: develop different things separately and use disposable virtual machines for the most untrusted stuff. You can also harden your development VMs at the same time.
When I've been on video calls with people using Qubes (admittedly rare, since there are few users), they inevitably can't get video or sound working, or spend the first 5-10 minutes of the call re-enabling some system access to drivers for the video-call window.
Not a problem for a developer who mostly communicates internally with a singular video call setup that you can optimize. But impossible for a founder, executive or salesperson that constantly has to be interacting with every video/call system in existence and doing screenshares and everything.
There's no reason why everything should happen on the same machine, hardware is relatively cheap. Don't shit where you eat; don't do your web browsing / videoconferencing / exploration of interesting new tools on the same machine from which you have privileged access to sensitive things. Virtualization is an option, but just buying an extra laptop might be more straightforward.
_> Don't shit where you eat_
1) Chromebook for web, social media. Separate profiles for personal trusted (banking), personal untrusted (browsing and social media), office/client work.
2) Separate laptop as VM host for development. Never pollute the host with third-party libraries or anything downloaded from the web, all that happens in disposable guests.
3) ssh and sftp from Chromebook to VMs as needed.
I’ve been thinking about something similar. I have an old laptop that I’ve been thinking of throwing proxmox or something similar on and just hosting VMs for my “daily driver” laptop to connect to
I find myself frequently wanting to screen share the things that I’m actively working on. I suppose I could use HDMI capture and confine myself to sharing only full screens (which might suck for viewers at 4K res or require me to switch to 1080p on that screen), but for a lot of people there is perhaps more overlap between video conferencing and work than full segregation would support.
If you're on Linux, xpra might be an option here. Just stream the rendered window from your sensitive access computer to wherever you need to show it.
Video calls work fine for me. It's all about the Linux-compatible hardware. Here is a good list:
https://forum.qubes-os.org/t/community-recommended-computers...
Apart from that, on Qubes you need to manually choose which VM will have access to a microphone and camera, so it may take an additional minute (but not ten minutes!). Persistent connection to a chosen VM should also be possible.
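The attach/detach itself is a one-liner in dom0; a sketch (the VM name is hypothetical):
```
# give the microphone to just the VM used for calls, then take it back afterwards
qvm-device mic attach comms dom0:mic
qvm-device mic detach comms dom0:mic
```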
Yeah, the more I think about it, the more it seems like this might be the way to go. What are the drawbacks? I know GPUs are hamstrung, so gaming isn't viable. But I can use another machine for that. What else?
The main challenge is that you need to adjust your workflows, making them separate for separate security domains. For example, never run an untrusted application in a trusted banking VM, etc. Otherwise there is this:
https://forum.qubes-os.org/t/major-ux-pain-points
Many of those pain points are solved by Qubes 4.1, which is not released yet, but there is a release candidate:
https://www.qubes-os.org/news/2021/11/17/qubes-4-1-rc2/
> I know GPUs are hamstrung, so gaming isn't viable.
Technically, you can also do GPU passthrough.
We could trust nothing beyond our base system and our browser, and refuse to use any code we don't fully audit, but this would be an impossibly austere way to live. [...]
The alternative is sandboxing, using a lightweight option like firejail (which I use) or a totalizing system like QubesOS. But these systems are awkward to use and have their own drawbacks.
I am somewhere between these two options, trying to be reasonably secure without too many drawbacks: all my software is installed either from Debian repositories, or compiled myself and run as an application-specific unprivileged user, with no access to X/Wayland when possible. (You could allow yourself to download binaries, but source makes me feel somewhat safer.)
I also run Firefox and VLC in Firejail because they are complex pieces of software that deal with lots of untrusted input, and need access to X/Wayland.
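The dedicated-user pattern is only a couple of commands; a sketch (the account name is arbitrary):
```
# create a locked-down system account and run the untrusted build as it;
# it cannot read my real $HOME, and by default it has no X authority cookie
sudo useradd --system --create-home --shell /bin/bash builder
sudo -u builder -H bash -c 'cd ~builder/project && make'
```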
The solution I am still working on is an xyte.ch X330 ThinkPad that has been modified to remove the Bluetooth and mic, as well as to swap the WiFi for an Atheros card (free software...).
It is also flashed with Heads, and I have a secret on a smartcard (USB stick) that I can use to sign my boot partition.
On the boot partition there will be a minimal system that lets me decrypt my hard drive and boot into my desired Guix system generation. The boot partition is signed, so it should never change (especially not every time you update your system configuration).
Guix allows you to bootstrap from a minimal seed, so once I finish making the software I need for the boot step, I will set this up on that laptop by bootstrapping Guix.
For me it's all about trying to go closer to the foundation, à la Precursor, rather than a theatre like Qubes where the complexity is just too much.
In the not-as-distant-as-it-may-seem future I will probably try Genode on a PinePhone or a laptop, and maybe it will be usable and robust.
Seems like a problem with VSCode.
Write more of your own code. If your app is made up of more packages and dependencies than you can audit then you're doing it wrong.
Consider OpenBSD.
> If your app is made up of more packages and dependencies than you can audit then you're doing it wrong.
My current employer has a policy where every dependency needs to be part of the software BOM (except it's FAR more comprehensive than what passes as an SBOM in the industry: it's an Excel sheet that goes into the double-letter columns, including, among many other things, a rationale for why you're using $thing) and signed off by legal (a process taking some time). It's kinda irritating to do, but it also opened my eyes to how completely unauditable e.g. npm-based projects are. Not that I had a high opinion of npm before.
The other day we had a thread here with a similar topic, and someone said "No one knows how to do builds without the internet"; someone else chimed in saying that Flutter (or some other framework) actually can't do offline builds. pip is somewhat similar: PEP 517 causes it to reach out to PyPI even when installing packages purely from a local source, though this can easily be disabled.
Those things are _utter insanity_ to me. You have no control. You have no idea what code you're running _now_, let alone tomorrow. Your builds will never reproduce, and your CI is going to fail whenever some random cloud webshit goes down.
Same for VSCode btw. ... it's not even shared source.
I totally agree. I am bewildered by the large number of developers working in such blind, insecure environments. Entire tech sectors grind to a halt when GitHub goes down. It is insanity.
We all talk about unit test coverage, end-to-end testing, hiring the right people (leetcode hell treadmill) all while we slide absolute crap in through the backdoor. It's Kafkaesque.
VS Code plugins extend the functionality of VS Code, not the apps you write with it.
Many of them are excellent, and it doesn't make sense to eschew their use just because you can't realistically audit their code.
And I guarantee you're also using a huge amount of code you don't and can't audit.
Unpopular opinion: I disable auto-update, and for every piece of software I use, I go over it at least casually, or wait some days (depending on how popular it is and its "trust level" in my mind), before first use and before each update.
One option would be to not run vscode locally:
https://github.com/features/codespaces
Some of the easiest things you can do regardless of OS are to use containers when developing (so you aren't installing npm packages on your host/main system), and don't store sensitive credentials on your disk in plain text (like in `~/.aws/credentials`). Use env vars and only export them when needed.
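A minimal sketch of both ideas (the image tag and pass entry names are just examples):
```
# develop inside a throwaway container: only the project dir is mounted,
# so a malicious npm postinstall script can't read ~/.ssh or ~/.aws
docker run --rm -it -v "$PWD":/work -w /work node:20 bash

# export credentials only into the environment of the one command that needs them
AWS_ACCESS_KEY_ID=$(pass show aws/key-id) \
AWS_SECRET_ACCESS_KEY=$(pass show aws/secret-key) \
aws s3 ls
```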
Yeah, I started to worry about this, but then realized that virtually all of my development work happens in containers anyway, for purely practical reasons, and it's not like I'm mapping my real home directory in or anything. So if I had a compromised module or something, it would affect that project but nothing else, all without me having to take any special security measures.
Run everything inside a VM. The next level would be developing on a live USB, Qubes OS, or separate bare metal for every project (screw the general-purpose computer).
At the router level you can block Tor, block VPS IP ranges, etc. You can also block the entire internet and only allow IPs from your browser history.
Besides sandboxing, you can run a firewall; I tested with some reverse shells and it does stop them. Of course, a red team can do more bad stuff.
There's nothing you can do. All language library distribution systems ever made are insecure, and backed by willful ignorance.
Over a decade ago we were laughing at poor corporations getting pwned because they ran IE and ActiveX and websites could do anything, same for Outlook or Word.
And here we are in 2021 and your dev tools are doing the same thing. Good luck with that.
It’s easy: set up a good firewall on your router, keep a computer airgapped instead of constantly online, don’t update for the sake of updating, and remove the LAN cable/WiFi card if you’re really worried.
Have an air gap between development machines and anything attached to the internet.
Or just put your machine in a bank vault.
And what if you are developing something that interacts with the Internet?
Use Debian and firejail. Avoid docker, snap and flatpak.
Use Windows Sandbox, which is built in, or have the dev run a VM which only exposes the needed ports. Don't make things too complicated…
Develop in a VM.
Is there a good solution for getting good graphics performance with VMs, aside from using QubesOS and its special shared-memory thing[1]?
Plain X11 is obviously out of the running, as that gives clients far too much access. Apparently[2] Xpra or xephyr can help here, but to what extent and how does it affect performance?
Wayland seems to use both sockets and shared memory somehow. Can that work with a VM or container? How? Is there another solution? (Pipewire?)
VNC works but doesn't perform well. SPICE might be slightly better?
[1]
https://www.qubes-os.org/faq/#whats-so-special-about-qubes-g...
[2]
https://firejail.wordpress.com/documentation-2/x11-guide/
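From what I've read, the xpra flow is roughly this (the hostname and display number are hypothetical); only rendered pixels for that one application cross the boundary, rather than a full X connection:
```
# on the VM: start a headless xpra session wrapping the app
xpra start :100 --start=code
# on the host: attach to that session over SSH
xpra attach ssh://dev@devvm/100
```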
I'm currently looking into NICE DCV[1] from AWS after learning about it in a podcast with Michelle Brenner from Netflix.[2]
My use case is on-prem, but it seems like this solution can run cloud agnostic.
[1]
[2]
https://www.infoq.com/podcasts/netflix-builds-workstations/
What's considered good graphics performance? I've never had an issue with plain old hardware accel/software rendering when just doing standard desktop tasks. It's generally fast enough.
If you're serious, then you want to search for GPU passthrough. This allows a VM to take control of an entire GPU. But you'll need a 2nd GPU for your main, bare metal OS.
x2go is fast, not sure how secure
Unless you use a different VM for everything, you've just moved the problem from 'secure your workstation' to 'secure your VM'.
If the threat was npm malware, having your dev instance inside the VM would reduce the fallout. Your personal & prod creds would be outside the VM.
It still goes a long way to reduce fallout with minimal effort & discipline.
You’re right, it’s not perfect security; the VM still needs securing. But it’s a good ROI.
As long as the container isn't leaky, the benefit there is that it makes recovery, back-ups, and isolation easier. For example, dev VM isn't where you would have your creds for bank, SSH keys, etc. So isolation of (more) sensitive, daily-use data is easier. Snapshots of VM are cleaner and fast to rollback, etc. The OP was looking for reasonable mitigations that don't make daily usage a complete drag. I think VMs or other containers fit some of that bill.
But then you still have to draw a line somewhere for what's trusted outside of it - browser & password manager for your bank example, and for SSH keys probably at least _some_ project related tooling that you have to decide to trust or audit or whatever.
If you use both host & VM, I think my point stands: the host may as well be another VM (or the VM another physical machine). It's not the virtualization that adds anything, it's the separation, which was my point.
> Unless you use a different VM for everything
Why yes, that's exactly the idea.
It's an idea; it just didn't sound like the idea of the commenter I replied to.
It is trivially easy:
https://edition.cnn.com/2013/01/17/business/us-outsource-job...