-+syhdddho.
.+yhyssssssshm/
```.-shsssoooosssssyN-
:+oossssssssssdhsso/:::::/ssssshdo:
.so/::::::::::/hyso/:------:sssssym+m.
/y/:::::::::::+hss/:--------:sssssyd:yy
/yo-.ys::::::::::::/hso/---------:ossssshs:om-
Go Null Yourself E-Zine `mysyhh/::::::::::::hss:---------:ossssssd/:+hh
:mo:::+sys+/:::/:::yyso--------:+sssssssyh:/+sN:
Issue #5 - Summer/July 2011 +dso+/:+ssyyysoo:::dsss:----:/+sssssssssd+:++sdh
/mssssosssssyho:::ohssso//++osssssssssshy::+osyN-
www.GoNullYourself.org .myyyssss++sh/::::+hsssssssssssssssssshy/:/osssms
+msssssyyyy/::::::dsssssssssssossssyhy//+ssssshN`
yhso+++++++/////+/yyssssssss+:::syhhy+osssssssyM-
-+h:os:-sdo.-os:../hyssssss+:/+sysosddyssssssymM/
:y+-:ssyNN+dMMh.sNNyohyysysyhmh:-yNMdsssssydd+yo
"Sometimes I'm scared to ` `` :dMMMMNdMMMNhNMNNdoNMMm/dMMhsssssdmo. /y
think of what goes on in .yNMMMMMMMMMMMMMMMMMMMMMNhssssymh- `d
that insane head of yours..." :y:+NMMMMMMMMNMMMMMMMMMMNysssshd+` d
+ddd::NMMMMMNdhMMMMMMMMMNysssyds. y`
:mhdhd.:NMMNddhhNMMMMMMMNysssydh o-
`+` `ddy.hhh`/NmdhNNdNMMMMMMNyssshy.d` `h:
.hMo-. smo` :ddsohhmMMMdmMMMMMNyssyh/`os` oh`
.mMMNNm- -ms shdhmmMMMMmmMMMMMhssyh- /+ +s`
,yNNNNNNNNo ,mMMMMMMMMd, .sNMMNo` yh` .mddo/MMMMmmMMMMhsshs` -- ::
-Mm oMd `NM: .+dy- `m. yy. mMMMmmMMMhssd/ `-`
:Mm -++++mM oMN mM: `/dd+. ./ ` yMMMmmMMdssd: ` -s`
.NMmmmmmmMM 'MMmmmmmNMN' :hNh+. /MMMdmMdsyh. :ysNMy
-dh dd. -hMMdo- -/.`/-.MMMdNmsyy` -dMMMM
:Mm MM. :hMMMmy` :ysy-o+-s.MMMdNyyy` ./Nho-
dNNNNNNNNN, MM. yM :Mm MM. :hMmh+-` `/ys+////ssMddyy` `.:+shyo-`
NM: :My MM. yM :Mm MM. .``+ss/. `:yo:/ossNmyd` `/osdNNms:`
MM: :MN MM. yM :Mm MM. `/oos/.`:so:/syym:.-. `.-:oNMNho-`
MM: :MM mMNmmmmmMM :Mm MM. `-oooo++ss+oooo+s+. `-:+oso/-`.:`
-oosoo+++oysyh/++oo++-``
.h+ sh :hdddddddh/ dd` :ds oddddddddy. `/ssyhy+syysoh/.`
-Mm+++++++oMM mMs:::::oMm MM. /Mh MM::::::hMh -yhyo/yssdo+s:
/sssyMMssso- mM/ oMM MM. /Mh MM :+/ `odyyyyydssyo+h/
.MM NMdyyyyydMN MMdyyyyymMh MM `/hysssssyd++//+d.
`o+ `+ooooooo+` .+oooooooo: oo -yhssssssydy+++/ss
`ohhssssssyyms+o/oo`
hM: `/hysoooooosssdshoo-
,ddddddd-d ,yddddddddo dM/ ,ddddddddd` -ydyssssssssyyydh+.
Mm+````` yMh`````yMM mM/ Mh```````` .odhssssssssssyyydo
'hhhhhhdM, yMh hhhhhh+ dMo MMNNNNNNNN. `/dhsooooooooosssssyh
,,,,,,,,MM sMN,,,,,,,, mMo My```````` .ydyssssssssssssssssym.
.oooooooo+: `/ooooooooo /o- My .hNmmmmho++ooosyyhdhyydo
My `yMMms-` `.hMMNhh`
+: `yMmo. `yMMy`
.-hMy. `sMN.
dMMh: sMs
/MN+ `dN+`
hm- /MM/
.m. +Ms
`.-/y+ ym
-:+syhhhh/ .N/`
`sdmmy/`
0x01 Introduction || 0x07 Hacking 15A Announcements Shadytel, Inc
0x02 Feedback + Edits || 0x08 Gawker Passwords Analysis SinThet
0x03 Public-Key Encryption and RSA diminov || 0x09 360-928-00xx Scan Shadytel, Inc
0x04 Iridium Satellite Network Shadytel, Inc || 0x0a Terminal Servicez br0 storm
0x05 An Introduction to x86 NASM storm || 0x0b ProjectMF - An Overview df99
0x06 Art of Crypto: Tips and Tricks duper || 0x0c Et Cetera, Etc. teh crew
[==================================================================================================]
-=[ 0x01 Introduction
Ahoy there, and welcome to the 5th issue of GNY Zine, your one-stop shop for hax, cats, and slacks.
Just kidding on the last one there - we don't actually wear pants.
It's been a busy last couple of months in the hacking scene, mostly due to the various escapades of
the attention-whoring group LulzSec, followed closely by the infamous collective of perpetually
bored, angsty teenagers known only as Anonymous. Lulz were had, as always with these types of
things, but like with everything else on the Internet, time passes and all is forgotten. Another
passing phase.
Moving on to more important matters, we're proud to announce that this issue marks the one year
anniversary of GNY Zine. We'd like to thank all of our contributing authors, for these are the
people who make the zine what it is. Every aspect of GNY Zine is 100% volunteer work, and we
greatly appreciate all the effort that our authors put forth to keep a steady supply of content
available for our scheduled releases.
We'd also like to thank our readers, who give the GNY team reason to publish. As long as the hacker
spirit lives on, we will continue providing the scene with informative, educational material as
often as we can. If you're a reader and wishing to help, please consider becoming an author!
Submitting content is the most helpful thing a person can do! Further information about becoming an
author is located at the end of this article.
Now, for a few announcements...
We are excited to report that OrderZero, the author of "Story of a Raid" from issue #1 and a close
friend to the GNY community, has officially had all charges against him dropped. OrderZero was
raided by the FBI in June 2010 in connection to a leak of confidential information from the website
Lockerz.com, invoking Title 18, Section 1030 (Fraud and related activity in connection with
computers). He was later contacted and told that the charges were being dropped due to his status
as a minor, and all of his equipment and books were returned. We wish OrderZero the best of luck in
the future.
We are also proud to announce that Shadytel, Inc, the monopolistic telecom conglomerate responsible
for innovations such as offering reduced comfort noise as a tariffed service and billing plans
starting at 7 cents per DTMF, is unveiling its latest innovation:
The Lean, Mean, LIGATT Machine
206-312-6033
The (LM)^2 is a crafty Asterisk script that generates random babble in the voice of Gregory D. Evans
by stitching together samples of the random babble of Gregory D. Evans. The finest of quotes were
sampled from the recorded phone interview with LIGATT (GNY Zine, Issue #4), and with a little
magic, our shady, phreaky friends have ensured the endless supply of LIGATT comedy gold for years to
come. If you'd like in on the eh oh els, the Lean, Mean, LIGATT Machine is reachable through the
phone number listed above.
Now, enough babble of our own. Let the zine begin.
Notable Events
==============
April 26, 2011 - Sony PSN is compromised and taken offline, beginning a long string of attacks
May 5, 2011 - LulzSec begins its attention-whoring campaign
May 21, 2011 - Lockheed Martin suffers a network intrusion linked to the RSA Security hack
June 22, 2011 - Ryan Cleary, loosely linked to Anonymous/LulzSec, is charged by UK authorities
June 25, 2011 - LulzSec ends its attention-whoring campaign
July 1, 2011 - GNY Zine turns 1 year old (woohoo!)
July 11, 2011 - Booz Allen Hamilton suffers an intrusion on one of its dev servers
-=-=-
Now, on to formalities...
If you are interested in submitting content for future issues of GNY Zine, we would be happy to
review it for publication. Content may take many forms, whether it be a paper, review, scan, or
first-hand account of an event. Submissions of ASCII cover art that display the GNY logo in some
way are also appreciated. Well-received topics include computer hacking and exploitation methods,
programming, telephone phreaking (both analog and digital), system and network exploration, hardware
hacking, reverse engineering, amateur radio, cryptography and steganography, and social engineering.
We are also receptive to content relating to concrete subjects such as science and mathematics,
along with more abstract subjects such as psychology and culture. Both technical and non-technical
material is accepted.
Submissions of content, suggestions for and criticisms of the zine, and death threats may be sent
via:
- IRC private message (storm, m0nkee, or Barney- @ irc.gonullyourself.org #gny)
- Reddit (stormehh @ reddit.com/r/gny)
- Email (zine@gonullyourself.org)
If there is enough feedback, we will publish some of the messages in future issues. Our PGP key is
available for use below.
We have put a lot of effort into this publication and hope that you learn something from reading
it. Abiding by our beliefs, any information within this e-zine may be freely re-distributed,
utilized, and referenced elsewhere, but we do ask that you keep the articles fully intact (unless
citing certain passages) and give credit to the original authors when and where necessary.
Go Null Yourself, its staff members, and the authors of GNY Zine are not responsible for any harm or
damage that may result from the information presented within this publication. Although people will
be people and act in idiotic fashions, we do not condone, promote, or participate in illegal
behavior in any way.
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.11 (GNU/Linux)
mQENBEzNnTIBCADCuSQtPeshJqqYd8KHfNoQ7ru3mWfwL3dc3MAgH1QYL1m1DSGs
3rAeWqyN2Jv1LVz2qLFXsqCdQhEW2wZg2tPPgoGiKAXbWE2itIoPSa/M1jrms6ai
vwq2ySiWPi2F77Rlyuwqs2Acoj+AGm1JINejx7DcK8RLWDViw+f8DMHmDZI4SS+s
fE7kVKh0/mLE7TGBXL7rCNA2bOPEHah0nQw2X18v3UNMV6R31FWVAZgSuL/RI+sV
LOuKDANYuj36KxFlx2pDUwHDUcB+BMqxzmdosC98xu80fKuNVEsLz3HpUXTfdSLJ
6F4gyKs1n2q7f6JcsdfoZ4nmj0IATnTK9tvfABEBAAG0HnN0b3JtIDxoaXhtb3N0
b3JtQGhvdG1haWwuY29tPokBPgQTAQIAKAUCTM2dhwIbIwUJCWYBgAYLCQgHAwIG
FQgCCQoLBBYCAwECHgECF4AACgkQ6oWhb3tw/4DtYgf9Ga/2HD5gP84qTZkh7aOx
PZQJJ3wJpZmQGw8kSvJLhtfBsvJJd8PuPay8aBmkVT+S+p0qUYjxc/BTD57t9O4+
Yh8DRk4gK+L9gvqR/RE/GxMEO+cyMXl0Nl8bTkV/qCygoctbTLPPJF37ZEFF0dp1
1kWUSdTkJ7++gs7b0+YCX65oyyg8OpHVSmw9KUU90aHyfeu7MdgGrEGR+FNDn9uK
m9WamrOp82UKmb8wytXfnbG7z2XvgRynxazl7I4ErExtr6pbyPJCryrIGmlG/qzT
cabX6tHtRnVSgrB+BVWu+XpHRi1lns8QxXYvV4SBAZDEBDq6f1qMpHFxyzq7MNSP
t7Qfc3Rvcm0gPHppbmVAZ29udWxseW91cnNlbGYub3JnPokBPgQTAQIAKAUCTM2d
fAIbIwUJCWYBgAYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQ6oWhb3tw/4CW
Dgf/dr7c6POPiMPrf30J39UrlvaS3BFo66WgEY3wa24brtv24Y19Ehk8fmP78uS/
tkfdg+6Pu280ILechVjofDqjDHSyVSy+CSVp1TJpgYvPbIcEa4JQoscUEe4lGJGg
1akXKu4RX1/o5wQrC/Tokm0NySxSPZfPhOnR5Bu1C6zvhneLVKpgLflfsCvlokxN
bo3TIAsfgqodkYR5CdyWGUYYQ9c4nbz0F6cSI2+k/mWFDljv4UQECl3MUcU2fNiC
a+1FAT6wmohVylYyyaA6YPVoe/9g5mKWQZyUq++bduLvV1qotpk7uJpKe3tgMJTn
/3tYZbhywejqTRRauGBSGv7QcrQgc3Rvcm0gPHN0b3JtQGdvbnVsbHlvdXJzZWxm
Lm9yZz6JAUEEEwECACsCGyMFCQlmAYAGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheA
BQJMzZ2KAhkBAAoJEOqFoW97cP+AS24IALcjJUygQnHg2kdIuGCErQP511aqxwFO
CC5MEXRG+Mg7GLrtc6wy+D89ifWQldUR0UwK/S7MMQC2OhOJtdvjai7k8LfmeG1G
iJZ6XYY7WEzaQWiVPso1P5SVo41OT38EXL6t2Ic3yGVGKJ9Vpo25SEmEoC9EL2Xa
Blze0Z/6x5JUbK0yCY37vu2mYGLFpg7lCKQL24vg13OjNOMzeJFQssPCOeSCHkJv
L+u5E9ohdUmHwWXAJVUieIu/S6sFDH0GrxNp8/YLhA4I/APpSjBZ6tofkrXNyajQ
9xjPT3KhuMErxRG+8a8iHhUH2VRibSdjwgJUxeg3DMqDQtxNFaRaFbqJAT4EEwEC
ACgFAkzNnTICGyMFCQlmAYAGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEOqF
oW97cP+AMmcH/jrXI3Y+WVkC3XgaRC+CnInMNJSLnMpoX2hkKfJsIMiiH19O41+O
W0U7bE0gvRjlDpQYEKlSnNz4a+bGmmceAmy6Rr11QsOuhtZG3/AfkhFEQ4f3U3zt
3miZILzcFc6vVXhXoq9stC6hoCzDPBu34s0OusHwxuVxX1eqCBSJYyrqSTlbxUKv
SYFfC/MzU6Q+iSZgiPNTYdgKIN3JKqZ2726i5IJOu6xIKNQByU4nEgV+Z4YjH7YD
MT9c6uSgqTACVM5h+3GW78G4Wl1E0lOXvimM/AEXHQSkZi34yq+JbOFspbyBhBz7
wRCIig4YSFDSwzPDdIx14NQlEq3+/tR9zx+5AQ0ETM2dMgEIALxlzgUfJ4leMnFF
gURwNGM5x9aTquU548xI4ESCeaDMkj6nHhrV4NAliBq28i48UjgI7IdE3pKYfQXi
aJZzQf4I+JULQkVzxF4uOjShhfXmhtABvBn+7du8qPqt5PwIFdb7ffmvXWFIX/in
+4QlDnlrz7xMQJBrBE9S4BJzR5IgWxpb7xA1yUWEJ+5vME3R+JhJuozmmmuMBHR1
s8pk8oEVrdmqdHeG5YZLsMyR5Kh6qJbPcj96CS9CtQU3HiEW0nwv8c3tNPY/4rNf
CAkeOWLAOvAq0Ybd82cIQr7Q0wVFo132H0Xs3Gw4MTiyvcd/BrGHeyjoBJfMhLCF
elFSEn0AEQEAAYkBJQQYAQIADwUCTM2dMgIbDAUJCWYBgAAKCRDqhaFve3D/gBq2
CACpH3rPcPb4HswNplVUMift+b5dV2ETYuNFXMK8yblFXa9URA6vdUzqrF9XSc6+
Tz9v/PVWY6FKKpnH06cbZQS07FWuY+zopsipuPgTaFLQyLlG2M+OoQOyEUYUpBW+
wTJ2Jd4hPiTlaoCLg2niA0RyzxzbnelrTtDtFtMoqJJlLWdtFoITW8/OLASHA7vu
bvRlfW89nueq9/4vEbxnvlUa7cOPtcZcGfHneHWV4JI9e5NJ6Agxp1gOkouF9/jn
YneawjaEgI6QOS06yyTXOu/XCo6L+f4/wd+1EMzt+NjsUXSraeNw+tdjZEZ8Uo9/
8QJQ4gF00KrsCCSrPyg/cZ5G
=g7oJ
-----END PGP PUBLIC KEY BLOCK-----
[==================================================================================================]
-=[ 0x02 Feedback + Edits
We always strive to publish accurate information in GNY Zine, but we the authors and editors are in
fact human beings and are subject to making mistakes from time to time, despite our best efforts.
The publication, compilation, and distribution of this e-zine is derived entirely from our passion
for technology and curiosity about how things tick. GNY Zine has no commercial influences. If you
find that there is an error in content that we have published, please do not hesitate to email us so
that it may be announced and corrected in the next issue. Not acting like a stuck-up elitist about
it will probably elicit a more positive response too.
With that being said, we are also receptive to content or personal experiences relevant to
information presented in past issues. If you've written some code, applied a concept in a new way,
or just want to voice your opinion about a topic, send us an email!
We may be contacted at: zine@gonullyourself.org
(PGP key is available in the Introduction)
Please note that emails we like will be published in future issues, so specify if you wish for your
message to remain private or if you wish for us to redact certain personal information from it.
----------------------------------------------------------------------------------------------------
Hey man,
I'd like to congratulate you on having a zine and website that doesn't suck. Today's "hacker"
culture tends to be either about e-penis (I hacked dis site cuz I'm l33t) or money (Credit Cards
br0). Your zine seems in the vein of phrack, the spread of knowledge for intellect's sake rather
than for idiocy, I salute you for that, quality material is always getting harder to find.
Thanks, and keep up the good work!
>> Thanks for the kind words - that's exactly what we're shooting for with the zine, so it's great
>> to see that's how readers are receiving it.
----------------------------------------------------------------------------------------------------
Hi guys...
Nice zine...I just came across yours and I can say I fairly like it. Great work!
I just wanna report a small typo. In a section that talks about
rootkit devel, it is said:
finger @kernel.org
In my box (CentOS), the working command is:
finger -l @kernel.org
OK, that's all. I hope that's useful. Have a nice day! :)
--
regards,
Mulyadi Santosa
Freelance Linux trainer and consultant
blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com
>> Thanks for the heads up. However, testing it out on my fc13 machine, `finger @kernel.org` seems
>> to be displaying the correct output:
>>
>> [storm@Dysthymia ~]$ finger @kernel.org
>> The latest linux-next version of the Linux kernel is: next-20110418
>> The latest snapshot 2.6 version of the Linux kernel is: 2.6.39-rc3-git9
>> The latest mainline 2.6 version of the Linux kernel is: 2.6.39-rc3
>> The latest stable 2.6.38 version of the Linux kernel is: 2.6.38.3
>> The latest stable 2.6.37 version of the Linux kernel is: 2.6.37.6
>> The latest stable 2.6.36 version of the Linux kernel is: 2.6.36.4
>> The latest longterm 2.6.35 version of the Linux kernel is: 2.6.35.12
>> The latest stable 2.6.35 version of the Linux kernel is: 2.6.35.9
>> The latest longterm 2.6.34 version of the Linux kernel is: 2.6.34.9
>> The latest longterm 2.6.33 version of the Linux kernel is: 2.6.33.11
>> The latest longterm 2.6.32 version of the Linux kernel is: 2.6.32.38
>> The latest stable 2.6.32 version of the Linux kernel is: 2.6.32.28
>> The latest longterm 2.6.27 version of the Linux kernel is: 2.6.27.58
>> The latest stable 2.6.27 version of the Linux kernel is: 2.6.27.57
>> The latest stable 2.4.37 version of the Linux kernel is: 2.4.37.11
>>
>> Running it on CentOS seems to be fine too:
>>
>> [storm@localhost ~]$ cat /etc/issue
>> CentOS release 5.6 (Final)
>> Kernel \r on an \m
>>
>> [storm@localhost ~]$ finger @kernel.org
>> The latest linux-next version of the Linux kernel is: next-20110707
>> The latest snapshot 3 version of the Linux kernel is: 3.0-rc7-git1
>> The latest mainline 3 version of the Linux kernel is: 3.0-rc7
>> The latest stable 2.6.39 version of the Linux kernel is: 2.6.39.3
>> The latest stable 2.6.38 version of the Linux kernel is: 2.6.38.8
>> The latest stable 2.6.37 version of the Linux kernel is: 2.6.37.6
>> The latest stable 2.6.36 version of the Linux kernel is: 2.6.36.4
>> The latest longterm 2.6.35 version of the Linux kernel is: 2.6.35.13
>> The latest longterm 2.6.34 version of the Linux kernel is: 2.6.34.10
>> The latest longterm 2.6.33 version of the Linux kernel is: 2.6.33.16
>> The latest longterm 2.6.32 version of the Linux kernel is: 2.6.32.43
>> The latest longterm 2.6.27 version of the Linux kernel is: 2.6.27.59
>>
>> The finger(1) manpage reports:
>>
>> If no options are specified, finger defaults to the -l style output if
>> operands are provided, otherwise to the -s style. Note that some fields
>> may be missing, in either format, if information is not available for
>> them.
>>
>> It'd be interesting to see what difference on your system keeps finger from defaulting to the -l
>> style output.
>>
>> Anyways, thanks again, and glad that you enjoy the zine.
[==================================================================================================]
-=[ 0x03 Public-Key Encryption and RSA
-=[ Author: dimonov
What is encryption?
~~~~~~~~~~~~~~~~~~~
Encryption is a procedure which consists of an algorithm and an
encryption key. The typical method is to encipher a message with a key and
an algorithm to get the encrypted form, called ciphertext.
Private-key encryption uses the same key for both encryption and
decryption.
Public-key encryption uses a different key for encryption and
decryption. RSA is a public-key encryption algorithm.
Public-key cryptography
~~~~~~~~~~~~~~~~~~~~~~~
With public-key cryptography:
1) Deciphering an enciphered message yields the original: D(E(M)) = M
2) Enciphering a deciphered (signed) message also yields the original: E(D(M)) = M
where M is the message, and E and D are the encryption and decryption
procedures applied to M.
Both E and D in [1] and [2] are trapdoor one-way functions. This means
that even though E may be revealed, it does not reveal an easy way to
compute D, nor does it allow decryption of the ciphertext E(M).
Why public-key cryptography?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With private-key cryptography, if a person named Bob wanted to send an
enciphered message to Alice, he would need to give Alice a copy of the
encryption key to decrypt the message. The problem with this scenario
is that the keys need to be distributed over a secure communication
channel. This is called the "key distribution problem". Before a
private communication can happen, there has to be a secure communication
channel already in place. If the key distribution were to take place over
an insecure communication channel, an intruder listening on the channel
could decipher the ciphertext after receiving the encryption key.
Public-key encryption "solves" this problem, because it does not require
any private couriers; its keys can be distributed over an insecure
communications channel.
Bob sending a private message to Alice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the public-key encryption process, if Bob wanted to send a private
message to Alice, he would take these steps:
(Encryption and decryption procedures are referred to with subscripts
Ea, Da, Eb, Db.)
1) Bob retrieves Ea from a public key database.
2) He then sends her an enciphered message, Ea(M).
3) Alice deciphers the message using the algorithm Da(Ea(M)) = M.
She can only decipher Ea(M) with Da. A response would need to
be enciphered with Eb, which is also available in the
public-key database.
An intruder listening on the communication channel won't be able to
decipher the ciphertext, since it isn't possible to derive the encryption
keys from the decryption keys. The author assumes that the intruder
cannot insert / modify messages in the channel.
Bootstrapping using public-key encryption
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Public-key encryption can be used as a "bootstrap" to create a secure
communication channel, over which another encryption key exchange can
take place (one which depends on a private communication channel).
Once a secure channel is created, the first message can consist of the
encryption key to decipher further messages.
Signing
~~~~~~~
Signing a message proves that a message wasn't forged; that it was
created by the person who holds the private-key.
Bob sending Alice a signed message
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If Bob wanted to send Alice a signed message, they would take these
steps:
1) Bob computes his signature for the message M using Db.
S = Db(M). Since each message in public-key encryption is the
ciphertext for another message, this is valid.
2) He then encrypts S using Ea, and sends it to Alice.
Alice receives Ea(Db(M)), or Ea(S).
3) Alice decrypts the ciphertext with Da to obtain S.
Da(Ea(S)) = S. Alice now knows that the sender is Bob, by
looking at the signature.
4) Alice extracts the message with the sender's encryption
procedure, available in a public-key database.
Eb(S) = M. Alice now has a message-signature pair of (M, S)
from Bob.
Bob cannot later deny the fact that he sent the message, since nobody
else could have created the signature S = Db(M). If Alice decides to
go to court, she would only need to show a judge the message-signature
pair (M, S), to prove that it was created by Bob. Alice cannot modify
M, since she would need to generate a corresponding signature,
S' = Db(M').
Rivest, Shamir, and Adleman's method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using RSA, a public-key encryption algorithm, a message M is encrypted
with an encryption key (e, n). e and n are a pair of positive
integers. The algorithm is as follows:
1) The message M is broken into a series of blocks. Each block
is represented as an integer between 0 and n-1.
2) The message is then raised to the e'th power modulo n. That
is, the resulting ciphertext is the remainder when M^e is divided
by n. C ≡ E(M) ≡ M^e (mod n).
3) Decrypting the ciphertext is done by raising it to the
power d modulo n. D(C) ≡ C^d (mod n).
The encryption key (e, n) and the decryption key (d, n) are a pair of
positive integers. Each user makes his encryption key public, and his
decryption key private.
Choosing encryption and decryption keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The algorithm to choose encryption keys is as follows:
1) n is computed as a product of two very large random prime
numbers p and q. Although n will become public, p and q will be
hidden from everyone else, because of the difficulty in factoring p
and q from n, if they are large enough. n = p * q.
2) A large random integer which is relatively prime to (p - 1) *
(q - 1) is chosen for d. That is, d is checked to make sure it
satisfies gcd(d, (p - 1) * (q - 1)) = 1.
Note: gcd = greatest common divisor.
3) The integer e is computed from p, q and d to be the
"multiplicative inverse" of d, modulo (p - 1) * (q - 1). The
formula used is e * d ≡ 1 (mod (p - 1) * (q - 1)).
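To make this concrete, here's a quick sanity check in Python (a toy
sketch only: the primes are the usual textbook example values and far
too small to be secure, and pow(d, -1, phi) needs Python 3.8 or newer):

p, q = 61, 53
n = p * q                   # 3233, made public
phi = (p - 1) * (q - 1)     # 3120, kept secret along with p and q
d = 2753                    # chosen so that gcd(d, phi) = 1, kept secret
e = pow(d, -1, phi)         # 17, the multiplicative inverse of d mod phi

M = 65                      # a message block in the range 0 .. n-1
C = pow(M, e, n)            # C = M^e mod n = 2790
assert pow(C, d, n) == M    # decryption recovers the original message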
Encrypting and decrypting efficiently
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To encrypt and decrypt with the RSA algorithm efficiently, a technique
called "exponentiation by repeated squaring and multiplication" is
used. In this implementation, enciphering and deciphering are similar,
making it possible to implement the algorithm in a few special-purpose
integrated chips. Using this procedure, M^e (mod n) can be computed in
2 * log e (base 2) multiplications and divisions. The steps to do this
are as follows:
1) Let Bk, B(k-1), ..., B(1), B(0) be the binary
representation of e.
2) Set C to 1.
3) Repeat steps 3a to 3b for i = k, k-1, ..., 0:
3a) Set C to the remainder of C^2 when divided by n.
3b) If Bi = 1, then C is set to the remainder of C *
M, when divided by n.
4) C is now the encrypted form (ciphertext) of M.
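These steps translate almost directly into code. A minimal Python
sketch, scanning the bits of e from most to least significant exactly
as described above:

def modexp(M, e, n):
    C = 1
    for bit in bin(e)[2:]:       # B(k), B(k-1), ..., B(0)
        C = (C * C) % n          # step 3a: square and reduce mod n
        if bit == '1':
            C = (C * M) % n      # step 3b: multiply by M and reduce mod n
    return C                     # step 4: C = M^e mod n

assert modexp(65, 17, 3233) == pow(65, 17, 3233)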
Finding large prime numbers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prime numbers p and q have to be large enough to make it
computationally infeasible for anyone to factor n = p * q. This is
worth noting because n will be in the public key database, whilst p and
q will stay secret. This is why RSA's authors recommend at least
100-digit prime numbers for both p and q, which makes n a roughly
200-digit number. An algorithm for finding large prime numbers is
included below.
1) Generate 100-digit random numbers, and test them for
primality. About (ln 10^100)/2 ≈ 115 numbers will be tested,
according to the prime number theorem.
2) Testing a large number b for primality is done by choosing
a random number 'a' from a uniform distribution {1, ..., b-1},
and testing whether gcd(a, b) = 1, and J(a, b) ≡
a^((b-1)/2) (mod b), where J(a, b) is the Jacobi symbol.
3) If this holds true for 100 randomly chosen values of a,
then b is almost certainly prime. There's a negligible chance
that b is composite, although even if a composite b were used
in RSA, the decryption wouldn't work correctly.
When b is odd, a <= b, and gcd(a, b) = 1, the Jacobi symbol J(a, b) has
a value in {-1, 1}, and can be efficiently computed using the
code:
J(a, b) = (a == 1)? 1 : (iseven(a)? J(a/2, b) * (-1)^((b^2-1)/8) :
J(b(mod a), a) * (-1)^((a-1)*(b-1)/4));
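For the curious, here is a rough Python rendering of that recursion,
together with the probabilistic test from steps 2 and 3 above (a
sketch, not hardened code):

from math import gcd
from random import randrange

def jacobi(a, b):
    # Jacobi symbol J(a, b) for odd b, following the recursion above
    if a == 1:
        return 1
    if a % 2 == 0:
        return jacobi(a // 2, b) * (-1) ** ((b * b - 1) // 8)
    return jacobi(b % a, a) * (-1) ** ((a - 1) * (b - 1) // 4)

def probably_prime(b, rounds=100):
    # Solovay-Strassen: if 100 random values of a all pass,
    # b is almost certainly prime
    if b < 3 or b % 2 == 0:
        return b == 2
    for _ in range(rounds):
        a = randrange(1, b)
        if gcd(a, b) != 1:
            return False
        if jacobi(a, b) % b != pow(a, (b - 1) // 2, b):
            return False
    return True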
Another technique for finding large prime numbers is to take a number
of known factorization, increment it by 1, and then test the result
for primality. If a prime p is found, it can be proved that it really
is prime by factorizing p-1.
Computing d from ϕ(n)
~~~~~~~~~~~~~~~~~~~~~~~~
Any prime number greater than max(p, q) can be used as d, although
it's important to use a number from a large enough set, to prevent it
being found by a direct search.
A variation of Euclid's algorithm can be used for computing d from
ϕ(n):
1) Calculate gcd(ϕ(n), d), by computing a series X0, X1,
..., where X0 = ϕ(n), X1 = d, and X(i+1) ≡ X(i-1)
(mod Xi), until Xk is equal to 0.
2) gcd(X0, X1) = X(k-1). Compute for each Xi the numbers Ai
and Bi, such that Xi = Ai * X0 + Bi * X1. If X(k-1) = 1, then
B(k-1) is the multiplicative inverse of X1 (mod X0). Since k
will be less than 2 * log n (base 2), the computation is
rapid.
3) If e < log n (base 2), start again, by choosing a different
d value. This guarantees something called a "wrap-around"
(reduction modulo n) for every encrypted message except M = 0
or M = 1.
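The same X/B bookkeeping can be written out in a few lines of Python
(only the Bi series is needed to recover the inverse); a sketch that
returns the multiplicative inverse of d modulo ϕ(n), i.e. the
encryption exponent e:

def modinv(d, phi_n):
    x0, x1 = phi_n, d        # X0 = phi(n), X1 = d
    b0, b1 = 0, 1            # Bi coefficients: Xi = Ai*X0 + Bi*X1
    while x1 != 0:
        q = x0 // x1
        x0, x1 = x1, x0 - q * x1
        b0, b1 = b1, b0 - q * b1
    if x0 != 1:              # gcd(phi(n), d) must be 1
        raise ValueError("d is not relatively prime to phi(n)")
    return b0 % phi_n        # B(k-1), reduced into the range [0, phi(n))

assert (modinv(2753, 3120) * 2753) % 3120 == 1   # recovers e = 17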
Security considerations
~~~~~~~~~~~~~~~~~~~~~~~
Since there aren't any known techniques to "prove" that an encryption
algorithm is secure, the only way to test it is to see if anyone can
break it. Whilst factoring small numbers isn't difficult, no one has
yet found an algorithm that can factor a 200-digit number within a
reasonable timeframe. The security of the RSA algorithm depends on the
factorization of such large composite numbers remaining infeasible: if
a more efficient and faster factorization method is discovered, it
would weaken the algorithm's security. Note also that all of this
presumes the private keys themselves are kept physically secure.
Factoring n
~~~~~~~~~~~
Factoring n would allow someone to break the RSA algorithm, since the
factors of n, which are p and q, would allow the computation of
ϕ(n) and d. Factoring is much more difficult than determining
whether a number is prime or composite.
Computing ϕ(n)
~~~~~~~~~~~~~~~~~
Computing ϕ(n) would allow someone to break RSA, by using the
result to compute d as the multiplicative inverse of e modulo ϕ(n).
This approach, however, is no easier than factoring n, since knowing
ϕ(n) allows n to be factored as follows:
1) (p + q) is obtained from n and ϕ(n), since ϕ(n) = n - (p + q) + 1.
2) (p - q) is the square root of (p + q)^2 - 4n.
3) q is half the difference of (p + q) and (p - q).
q = ((p + q) - (p - q))/2
Because ϕ(n) would be trivial to compute if n were prime (it would
simply equal n - 1), n must be composite.
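In Python, those three steps amount to the following (reusing the toy
numbers from earlier; math.isqrt needs Python 3.8 or newer):

from math import isqrt

def factor_from_phi(n, phi_n):
    s = n - phi_n + 1                    # step 1: s = p + q
    t = isqrt(s * s - 4 * n)             # step 2: t = p - q
    return (s + t) // 2, (s - t) // 2    # step 3: recover p and q

assert factor_from_phi(3233, 3120) == (61, 53)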
Computing d
~~~~~~~~~~~
Once d is computed, n could be factored easily; which is why computing
d is no easier than factoring n. If d is known, n could be factored as
follows:
e * d - 1 is calculated, which is a multiple of ϕ(n).
[n can be factored using any multiple of ϕ(n), according to Miller's
results on primality testing, which rely on the extended Riemann
hypothesis.]
References
----------
A Method for Obtaining Digital Signatures and Public-Key Cryptosystems
- by RL Rivest, A Shamir, and L Adleman, MIT Laboratory for Computer
Science and Department of Mathematics [Communications of the ACM, 1978].
[==================================================================================================]
-=[ 0x04 Hacking the Iridium Satellite Network
-=[ Author: Shadytel, Inc
-=[ Website: http://www.shadytel.com
Hello again there kids - It's time for yet another slice of the Shadytel world. This issue, it'll be
double the Shadytel, double the fun!
This time, we're here to talk about a network you probably don't know about, but has carried voices
farther than the length of any voyage man has traveled, and lives above - even beyond us. This time,
Shadytel is breaking into space.
In 1998, Iridium was the next big thing. With 2400 bits per second of goodness, and one of the most
expensive
vocoder licenses known to man (AMBE or Advanced Multi Band Excitation licenses are thought to range
anywhere from $100,000 to $1,000,000 US dollars), the company had something to prove - mostly that
people would pay dollars a minute to make a phone call.
In less than two years, they went bankrupt. Despite a strong backing from Motorola, the company was
only able to keep the network afloat for so long. Plans were made to let all 66 satellites, and the
spares alike, burn up in the atmosphere or crash into the ocean. Fast forward to today. The Iridium
network as we know it still exists. Sort of.
The US Department of Defense, fearing the network they'd bought thousands of handsets for would stop
working, started pumping as much money as they could into the zombie of a company. In addition to
their stake of ownership in the company, the DoD has a gateway off a base in Hawaii. To this day,
the Department of Defense still makes up 23% of the company's revenue.
Moving onto the network itself, not much in the way of hardware has changed since 1998; nearly all
calls are processed via a PSTN gateway in Tempe, Arizona, though rumors suggest that a functional
gateway still exists in Avezzano, Italy. Beyond the Qwest 5ESS that links them to the outside world,
very strange things exist on the Iridium side, most notably a Siemens D900, a modified EWSD
typically used for GSM services working with a custom IVR to run the show from the ground. This
could possibly justify Iridium's explanation of their 66 low earth orbit satellites "functioning not
unlike extremely tall cellular towers."
So with the very, very notable DoD presence on a network used excessively by foreign embassies and
other strange organizations willing to pay a high price for a network possibly neutral from corrupt
nations, is this article meant to expose the company as a government front?
Hell no! We're here to help you own the crap out of it! From a numbering plan standpoint, Iridium is
very sporadic, occupying a large number of hundred blocks on Qwest's Tempe 5ESS, TEMPAZMCDS0.
Exceptions to this exist, though. In one range is an IVR simply known as the two-stage dialing
service. More accurately, since calls to Iridium's country code +881-6 are not only blocked on some
carriers, but hideously expensive (ranging anywhere from $1 to $5 per minute), Iridium made the
decision to let the called party pay said dollar per minute if they decide to opt in. While the
number (480-768-2500) is useful for scanning, it's also a little misleading. There are twenty
available numbers in that range, and two others assigned; 2505 goes to some unnamed calling card
platform, presumably for cheap access to the Iridium network, and 2510 goes through the satellite
gateway switch to a modem; what looks to be a Lucent Portmaster. Beyond these, most numbers will go
to ordinary Qwest subscribers.
This is the sort of environment you need to be accustomed to when dealing with Iridium. They'll
often take up roughly anywhere from twenty through fifty numbers in a hundred block, instead of
using the whole thing. Exceptions exist, though, particularly in exchanges occupied or near their
Avaya PBX. Speaking of which, aside from the Iridium test number, which we'll discuss in a bit, the
Iridium PBX is only good for two things:
- Steely Dan hold music
- Bugging NOC employees inhabiting the building 24/7
This would also be a good time to mention that the satellite master control center, according to
Iridium's 1997 website archive, is in northern Virginia, a territory where the company's current
incarnation has a corporate headquarter presence to this day.
Getting back to Tempe though, there are numbers that Iridium publishes for people to use, and given
the spastic nature of the way numbers are assigned, every little bit certainly helps. For example,
there's 480-752-5105. This is a free call for Iridium subscribers, but more importantly, it's a PBX
range owned by Iridium. Nearby in that exchange, 40xx, 41xx, and 42xx are all jammed full of numbers
pointing to the D900.
There's also 480-345-4340, the Iridium fax service.
And since we're a reputable corporation filled with shady deviants, we're releasing an Iridium range
just for you. 480-456-7000 through 8199 will largely go to the D900, giving all of you lazy phreaks
more than enough room to start.
Finally, once you find yourself needing to know which numbers are which on the Iridium network,
either by way of the two-stage dialing system, alarming amounts of toll-fraud, or voicemail numbers
announced on the dedicated DIDs, there is indeed structure to the way their exchanges are
provisioned. Here's a handy guide for just that:
8816-214 Commercial Accounts
8816-224 Commercial Accounts
8816-310 Test/Demo Accounts
8816-314 Commercial Accounts
8816-315 Prepaid Accounts
8816-316 Prepaid Accounts
8816-317 Colombia Ministry of Defense
8816-318 Crew Calling Card
8816-414 Commercial Accounts
8816-415 Prepaid Accounts
8816-514 Commercial Accounts
8816-629 contains smsc, etc
8816-762 DoD limited voice service
8816-763 DoD voice service
8816-766 DoD international voice service
There are other unconfirmed myths we have about the Iridium network, such as a partnership with
Sprint long distance, or the deep voiced male network announcements coming from the satellites
themselves, but that's where we pass the torch on. We bring you our unsolved mysteries, make them
the solved secrets that only you know. Go forth, shady readers, and happy dialing!
[==================================================================================================]
-=[ 0x05 An Introduction to Programming with x86 NASM
-=[ Author: storm
-=[ Email: storm@gonullyourself.org
-=[ Website: http://gonullyourself.org/
This article is meant to serve as a SIMPLE and INTRODUCTORY guide to writing x86 assembly code on
Linux using the NASM assembler. When I first began learning assembly, I realized that there weren't
many quality resources suited for a beginner to the language, and I found myself learning mostly
through word-of-mouth and referencing well-documented pieces of source code. I wished to write an
article that would make up for this, bringing the cryptic language down to a level that beginners
could understand. Please understand that the writing style leaves room for definitions that are not
100% correct in every possible technical sense, but this is intentional, to promote understanding
when one may not yet possess a full grasp of all the underlying concepts.
For this article, we will be working with the following "Hello World" example:
section .text
global _start
_start:
mov eax, 4
mov ebx, 1
mov ecx, hello
mov edx, hellosize
int 0x80
mov eax, 1
mov ebx, 0
int 0x80
section .data
hello db 'Hello world',0x0d,0x0a
hellosize equ $-hello
To get straight to the point, here's the quick and dirty way to compile a program with NASM:
[storm@Dysthymia ~]$ nasm -f elf hello.asm
[storm@Dysthymia ~]$ ld hello.o -o hello
[storm@Dysthymia ~]$ ./hello
Hello world
[storm@Dysthymia ~]$
It is important to note that when we write assembly code, we will be using the Intel syntax. The
two syntaxes primarily used on x86 are Intel and AT&T, of which the most noticeable difference
between the two is the order of operands (the arguments) in instructions.
The Intel syntax looks like:
mov dst, src
such that the following instruction:
mov eax, 100
stores the value 100 (source) into the eax register (destination).
The AT&T syntax is exactly the opposite:
mov src, dst
It also adds some syntactic sugar, distinguishing between immediate operands (hard-coded values) and
registers:
mov $100, %eax
For the length of this article, we will be using:
http://gonullyourself.org/main/shellcode/documentation/Linux%20x86%20System%20Calls%20Reference%20for%20kernel%202.6%20and%20higher/main.html
as our referenced documentation. Note that everything found in this documentation has its own
manpage, but it is admittedly cryptic and may seem intimidating to a beginning programmer. This
reference was shamelessly stolen from the LSCR project (http://sourceforge.net/projects/lscr/), so
you may download a local copy of it bundled in the latest tarball.
Open the system calls reference in your web browser and click on the Index view. Scroll down to
sys_write and select it.
On the page, we see:
eax 4
ebx Device descriptor.
ecx Pointer to the buffer containing the data to be written.
edx Number of bytes to be written.
These four arguments - eax, ebx, ecx, and edx - are called 'registers'. If you're not familiar with
registers, think of them as analogous to variables in high-level languages, like PHP or Python.
Only with assembly, these reside on the CPU itself. Let's consult Webopedia:
A special, high-speed storage area within the CPU. All data must be
represented in a register before it can be processed. For example, if two
numbers are to be multiplied, both numbers must be in registers, and the
result is also placed in a register. (The register can contain the address
of a memory location where data is stored rather than the actual data
itself.)
The number of registers that a CPU has and the size of each (number of bits)
help determine the power and speed of a CPU. For example a 32-bit CPU is one
in which each register is 32 bits wide. Therefore, each CPU instruction can
manipulate 32 bits of data.
Usually, the movement of data in and out of registers is completely
transparent to users, and even to programmers. Only assembly language
programs can manipulate registers. In high-level languages, the compiler is
responsible for translating high-level operations into low-level operations
that access registers.
CPUs have a specific number of registers as well as specific names and purposes for each of them.
All of these change from architecture to architecture. For instance, on the x86 architecture,
16-bit systems have the four general purpose registers ax, bx, cx, and dx. On 32-bit systems, these
four registers were 'e'xtended into eax, ebx, ecx, and edx. 64-bit systems extended these four
registers even further, and they became rax, rbx, rcx, and rdx. For this article, we are working
with a 32-bit x86 system.
When writing assembly code, our goal is to manipulate the contents of registers in such a way to set
the stage for executing system calls. System calls (syscalls) basically act as an API to the kernel
to do the most basic of basic tasks. Each kernel (Windows, Linux, XNU, so on) provides different
syscalls, and this list usually expands with newer versions.
Similar to how the PHP language gives us print() and fwrite(), the Linux 2.6 and higher kernel
provides us with sys_write, which we use in our hello.asm program. You can read the official
manpage of sys_write at `man 2 write`. In case you didn't know, manpages are divided into sections,
and section 2 is devoted entirely to syscalls.
Now that we know the basics, let's step through our code. The first line of our hello.asm program:
mov eax, 4
What we are doing here is storing the value 4 to the eax register. By doing this, we are setting
ourselves up to tell the CPU, "Hey, when I tell you to, execute syscall #4." Each individual
syscall has a unique number, and by looking back at the documentation for sys_write, we see that
it's assigned the number 4. If we wanted to execute sys_uname instead, for instance, then we would
store the number 122 to eax.
Moving onto the second line, we see:
mov ebx, 1
In the documentation, we can see that ebx is used for:
ebx Device descriptor.
This is simply a cryptic way of asking the programmer "Where do you want to write the data to?" The
POSIX specification standardizes three such descriptors (more commonly called file descriptors):
STDIN (Standard In) - input
STDOUT (Standard Out) - output
STDERR (Standard Error) - error
When you type on the command line to give input to a program, you are writing data to STDIN. When
a program prints data to the screen, it is writing to STDOUT. When a program prints an error
message, it is writing to STDERR. Each of these device descriptors is assigned a number:
0 - STDIN
1 - STDOUT
2 - STDERR
We store the value 1 to ebx, because we want sys_write to write to STDOUT, i.e., the terminal. If
you instead wanted to write to a file, then we would first use the sys_open syscall, which returns a
file descriptor (represented by a number) that we would then pass on to sys_write.
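If the register-level view feels abstract, the same descriptor dance can be sketched in a few lines
of Python, whose os module is little more than a thin wrapper around these syscalls (the file name
here is just a made-up example):

import os

fd = os.open('output.txt', os.O_WRONLY | os.O_CREAT, 0o644)  # sys_open returns a file descriptor
os.write(fd, b'Hello world\r\n')                              # sys_write(fd, buf, count)
os.close(fd)                                                  # sys_close

os.write(1, b'Hello world\r\n')   # descriptor 1 is STDOUT, just like ebx = 1 above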
Predictably, we will want to set up ecx next:
ecx Pointer to the buffer containing the data to be written.
We do this with the next line in our code, where we store a pointer to the string "Hello world\r\n"
to ecx:
mov ecx, hello
To explain this operation we are doing, look down below, where you'll see the declaration of the
string "Hello world\r\n":
hello db 'Hello world',0x0d,0x0a
Looking at the NASM documentation, the NASM language provides the following pseudo-instructions to
declare data in a program:
db value ; Allocate a byte sized value
dw value ; Allocate a word sized value
dd value ; Allocate a dword sized value
Here we declare a string using the db pseudo-instruction (as it's not an actual instruction in the
assembly language, but a tool offered by NASM), which is stored to the .data section of memory
(designated for initialized variables). We assign this value to the name 'hello', which is not a
register, but another tool offered by NASM that allows us to work with the notion of variables in
writing our program. It should be noted that the actual string is not assigned to 'hello'; instead,
'hello' represents the location in memory where our given string is stored, called a pointer. This
pointer is passed on to ecx in our program. Instead of writing the string itself to the ecx
register (since registers are very small), we give it a pointer to the data we want to write.
To reiterate, a pointer is simply a memory address that "points" to the data residing at that
address. When we want to run our program, the "Hello world\r\n" string is copied into memory, and
the address of where these bytes are located would be the value of our pointer. Most programs
written in the C language work closely with the notion of pointers too. A buffer or function is
referenced by its name, and a pointer to the buffer or function is obtained by prefixing the name
with an ampersand (&). Here, we can see the pointer at work in our program:
[storm@Dysthymia ~]$ gdb hello
GNU gdb (GDB) Fedora (7.2-51.fc14)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/storm/hello...(no debugging symbols found)...done.
(gdb) info variables
All defined variables:
Non-debugging symbols:
0x080490a4 hello
(gdb) x/13xb 0x080490a4
0x80490a4 <hello>: 0x48 0x65 0x6c 0x6c 0x6f 0x20 0x77 0x6f
0x80490ac <hello+8>: 0x72 0x6c 0x64 0x0d 0x0a
(gdb)
As you can see in the disassembly below, this is exactly the value that 'hello' is replaced with
(look at offset +10).
(gdb) disassemble _start
Dump of assembler code for function _start:
0x08048080 <+0>: mov $0x4,%eax
0x08048085 <+5>: mov $0x1,%ebx
0x0804808a <+10>: mov $0x80490a4,%ecx
0x0804808f <+15>: mov $0xd,%edx
0x08048094 <+20>: int $0x80
0x08048096 <+22>: mov $0x1,%eax
0x0804809b <+27>: mov $0x0,%ebx
0x080480a0 <+32>: int $0x80
End of assembler dump.
(gdb)
Our fourth line of the code:
mov edx, hellosize
By looking at the documentation, we see that edx is associated with:
edx Number of bytes to be written.
If you look down at the bottom of the code, you'll see some special syntax that grabs the size of
our 'hello' string and saves it to 'hellosize', analogous to strlen() in C. Instead of
storing the literal number 13 to edx (11 bytes for the text, 2 bytes for the carriage return and
newline), we just say "whatever the size of 'hello' is". By doing this, it abstracts the process of
determining the length of our string, which is useful should we change the string being written.
For example, if we change 'hello' to instead be "Hello there world, how are you?\r\n", then the
value stored to edx will automatically change to 33. With the original "Hello world\r\n" string in
mind, we can give edx a value of 5 and it will only print "Hello". If we give edx a value of 11,
then it will only print "Hello world" with no trailing whitespace.
The next line in our asm code:
int 0x80
This is called a kernel interrupt and is basically our program's way of notifying the kernel that
everything is set and we're ready to run the syscall. At this point, the value of eax will be read
and recognized to hold a value of 4, prompting the kernel to run sys_write. The remaining registers
are read and passed as arguments to the kernel function.
If you'd like a first-hand look at what's happening under the hood, then take a look at the Linux
kernel source code, itself. As of the writing of this article, we are looking at the latest stable
release of the kernel, 2.6.39.3. The sys_write function resides in fs/read_write.c :
SYSCALL_DEFINE3(write, unsigned int, fd, const char __user *, buf,
size_t, count)
{
struct file *file;
ssize_t ret = -EBADF;
int fput_needed;
file = fget_light(fd, &fput_needed);
if (file) {
loff_t pos = file_pos_read(file);
ret = vfs_write(file, buf, count, &pos);
file_pos_write(file, pos);
fput_light(file, fput_needed);
}
return ret;
}
Each register we set matches up exactly to each argument passed to the function: an (unsigned)
integer to the file descriptor we're writing to, a pointer to the buffer of data we're reading from,
and a count of how many bytes to write.
The kernel will execute the syscall and print out "Hello world\r\n" to STDOUT.
Looking further down the example code, there is one more interrupt we execute before the program is
finished. Corresponding to an eax value of 1 is the sys_exit syscall, which is used to cleanly
terminate the current process. The ebx register holds an integer that represents the return value
of the process. It is mostly standardized that a return value of 0 means "no error," while a return
value of anything but 0 means an error of some sort occurred. Concerning errors in processes, the
integer returned is matched to a specific error code by consulting the program's documentation.
This is different from error reporting in C, where the return value upon error is usually -1, and
the integer representing the error code is stored in the 'errno' variable.
Expectedly, our simple program has encountered no errors, so we mov the literal value of 0 to ebx
and execute the syscall, effectively ending the program.
As outlined at the beginning of the article, we now compile our NASM program like so:
[storm@Dysthymia ~]$ nasm -f elf hello.asm
[storm@Dysthymia ~]$ ld hello.o -o hello
[storm@Dysthymia ~]$ ./hello
Hello world
[storm@Dysthymia ~]$
If you're of the curious type, you may wish to start analyzing other binaries and see which system
calls they execute. This can be done using the `strace` command:
[storm@Dysthymia ~]$ strace ./hello
execve("./hello", ["./hello"], [/* 62 vars */]) = 0
write(1, "Hello world\r\n", 13Hello world
) = 13
_exit(0) = ?
[storm@Dysthymia ~]$
It may be interesting to observe the complex execution path that's followed even when a simple
program like `echo` is run without any arguments.
Hopefully after reading this, you have gained a fundamental understanding of the assembly language
and other basic, universal OS concepts. In future issues, we'll take it one step further and use
our knowledge to reverse engineer programs, and build exploit payloads, better known as shellcode.
[==================================================================================================]
-=[ 0x06 The Art of Crypto: Tips and Tricks
-=[ Author: duper
-=[ Website: http://projects.ext.haxnet.org/~super/
.______________________________________,
| |
| The Art of Crypto: Tips and Tricks |
:______________________________________:
| |
| |
| [=%=%=%=%=%=%=%=%=%=%=%=%=%=%=%=%=%] |
| { } |
| [ another fine article brought to ] |
| [ you by duper of HaxNet #projects ] |
| { } |
| [%=%=%=%=%=%=%=%=%=%=%=%=%=%=%=%=%=] |
; !
`=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-'
Before I begin, I'd like to make it absolutely clear that I am by no means: a professional code
breaker, an expert cryptographer, or a math genius. Therefore, this article does not aim to cover
low-level implementation details, such as the lookup tables and their corresponding S-boxes
(Substitution boxen). Furthermore, the theory behind some advanced attacks, such as collisions in
one-way hash functions as in the case of MD5, will be touched upon, but the finer mathematical and
technical idiosyncrasies are irrelevant in what follows. Similarly, the following text will not be
dedicated to subtleties like easily constructed subliminal channels, such as the well-known single
character modification to the Digital Signature Algorithm (DSA, a variant of the ElGamal signature
scheme).
The purpose of this rambling treatise is to share effective techniques and experience that I've
gained over the years which can be immediately applied to practical information security scenarios
by both those that are relatively new to hacking crypto and the seasoned system administrator as
well. The primary goal is to present some tips and tricks that can be used to easily identify and
exploit weaknesses that are commonly found in custom and well-known cryptosystems in the wild.
Initially, however, some preliminary historical information will be outlined, as it pertains to the
current state of cryptographic science today.
Only a small amount of perfunctory encryption knowledge is assumed; if the reader happens to lack
even this, then some introductory reading resources are recommended in order to get up to speed with
the material. If you're unfamiliar with the names "Alice", "Bob", and "Eve" (among others) when used
as generic wildcards in theoretical public key infrastructure (PKI) examples (similar to "foo",
"bar", and "baz" in code samples), chances are you should at least breeze through some reference
literature pertaining to the study of making/breaking ciphers on the web -- make sure you're
familiar with at least a few abstract concepts that are relative to the public key and private key
approaches to encryption.
If you've never even heard of public or private key crypto before, then before continuing with this
article you really should invest a bit of attention to at least the first several chapters from the
books which essentially represent the bibles of the crypto world: 'Applied Cryptography: Protocols,
Algorithms, and Source Code in C', and 'Practical Cryptography' (the second edition is called
'Cryptography Engineering'). 'Practical Cryptography' and its renamed counterpart are co-authored by
Niels Ferguson, while all of the fly named titles are authored by Bruce Schneier, the man who some
say is the closest thing that the digital security industry has to a rock star. Bruce maintains a
consistently updated blog on his web site: www.schneier.com. The source code for the older, yet
still extremely relevant piece 'Applied Cryptography,' is freely downloadable from his site.
Now that we've gotten the prerequisites out of the way, let's start off with some interesting
historical background. As most of you probably already know, a sender encoding some message with a
cipher in an attempt to keep the real meaning a secret, while still enabling it to be read by the
intended recipient(s), has been utilized in the art of war for ages. A popular example that just
about every hacker has at least heard of is Julius Caesar's simplistic substitution or "rotating"
cipher; without a doubt, ROT13 is a permanent fixture in crypto lore and related subjects.
A more modern, yet less conventional employment of encoded messages was carried out by the United
States of America, which took advantage of a small segment within its indigenous population to
speak their traditional language in radio transmissions between battleships in the Pacific. This
Native American (or American Indian -- take your pick) language was more or less unknown to the rest
of the world. This fact, combined with the time and difficulty inherent in the process required to
conduct a linguistic analysis of the dialect by opposing forces, led to much success in the secret
communication of urgent war time agendas for the Americans. Quite appropriately, the indigenous
peoples that made such an effort possible became known forevermore as "code talkers."
Due to ongoing conflict across the globe, the mid-twentieth century witnessed many innovations in
code making and code breaking come and go. For example, messages enciphered by the "Enigma" crypto
machines used by Nazi Germany during World War II and transmitted via Continuous Wave (CW) Morse
code were decrypted by the Western Allies after acute observation of permutation and group theory
principles, first by Polish cryptologists and later by Alan Turing's team at Bletchley Park. These
machines were certainly pretty crude by today's standards
and became obsolete soon after their invention. Soon after, the U.S. enacted legislation which
consolidated previously existing defense organizations tasked with code-breaking during the war
effort into the National Security Agency (NSA) as it's known today. Since its inception, the NSA has
been first and foremost: a highly specialized government wing specializing in the practice of
cryptographic endeavors (which is evidenced by prominence of a key pictorial on the agency's seal).
Several decades later, through the continuous progression of Moore's Law, extremely complex
cryptographic systems and algorithms became available to home users; most notably, Phil
Zimmermann's Pretty Good Privacy (PGP) e-mail encryption program began to enjoy widespread use. Not
long afterwards, with the growth of the World Wide Web and the vast increase of electronic commerce
transactions taking place over the Internet, Secure Sockets Layer (SSL) was coded into the fabric of
all popular web browser software as a transport layer security (TLS) mechanism, which just about
brings us to where we are now.
Realistically, it doesn't take a genius to be able to crack a cipher. An obscene number of children
and housewives do it daily without realizing it while solving the word jumble puzzles most commonly
found near the comics section in locally distributed newspapers. If one knows what to look for, it's
really not all that difficult to point out at least a few weaknesses that exist in the majority of
cryptosystems used by Internet applications and other computer programs. Of course, some issues are
much more severe than others.
The remainder of this article will contain at least a brief description for each of a vast number of
encryption weaknesses commonly encountered on the Internet today. Typical encryption-related
security holes found on web servers and other daemons implementing SSL will be discussed as well as
some common misconfigurations that occur in the deployment of public key infrastructure on the
Internet in general.
Up until recently, one of the most commonly existing weaknesses in web servers featuring the
HyperText Transfer Protocol/Secure (HTTPS) method was the permission of SSL version two connections
from web browsers. SSLv2 has long been known to be vulnerable to a man-in-the-middle attack which
involves re-negotiating the same SSL handshake that is performed after a client connects to the
server's listening port. The third version of the SSL protocol was supposed to fix this weakness;
however, it arose again in a slightly different form. It seems that the most recent versions of the
OpenSSL library no longer permit the re-negotiation of an SSL session at all. Not long ago, a new
stream could be negotiated with the server-side by simply pressing capital "R" within an ongoing
connection using the s_client program.
$ openssl s_client -ssl2 -connect google.com.:443
Another commonly found weakness involves the use of weak encryption algorithms to encrypt data
transmitted over a Transmission Control Protocol (TCP) connection protected by SSL. For example,
crypto algorithms classified as export-grade produce ciphertext which doesn't take long at all to
crack, even if the brute force search is performed on computing hardware that is considered
relatively modest by today's standards.
$ openssl s_client -connect google.com.:443 -cipher EXPORT40
$ openssl s_client -connect google.com.:443 -cipher EXPORT56
Another weakness that commonly occurs on SSL servers is the use of the Electronic CodeBook (ECB)
mode of encryption. The problem here is that ciphertext blocks corresponding to identical
plaintext blocks will themselves be identical. This is because the algorithm basically remains in
the same computational state from start to finish. After the initial block of text is run through
the cipher, if an identical block of text is encountered later on in the message, it will yield
the exact same ciphertext as the preceding equivalent block. Such repetition leaks the structure
of the plaintext to anyone observing the ciphertext and gives a cryptanalyst a running head start.
In general, crypto software configured for ECB mode is unacceptable, especially since more secure
modes such as Cipher Block Chaining (CBC) and Cipher FeedBack (CFB) are usually available. CBC and
CFB modes take the results from previous encryption iterations and use them as input while
encrypting the remaining plaintext blocks. In this way, if two or more blocks with identical text
happen to appear in the input stream, then the ciphertext output between them will differ. This
makes cryptanalysis a significantly more difficult job. Note that the described weakness in ECB
mode will not arise if the plaintext length happens to be less than or equal to the encryption
algorithm's block size. Nevertheless, Electronic CodeBook should still be avoided altogether, just
in case the key and/or input size change unexpectedly. Some extra efficiency isn't worth the loss
in data security.
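A quick way to see the repetition for yourself is with the openssl command-line tool; the key and
IV below are arbitrary throw-away hex values used purely for illustration.
$ # ECB: two identical 16-byte plaintext blocks encrypt to two identical ciphertext blocks
$ printf 'ABCDEFGHIJKLMNOPABCDEFGHIJKLMNOP' | \
    openssl enc -aes-128-ecb -K 00112233445566778899aabbccddeeff -nopad | xxd
$ # CBC: the same input and key, but chaining makes the second ciphertext block differ
$ printf 'ABCDEFGHIJKLMNOPABCDEFGHIJKLMNOP' | \
    openssl enc -aes-128-cbc -K 00112233445566778899aabbccddeeff \
    -iv 000102030405060708090a0b0c0d0e0f -nopad | xxd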
Typically, daemons that provide SSL services are linked with the OpenSSL library. Often, this is a
vulnerability in and of itself since the crypto security playing field is constantly in motion.
It's more than likely that a currently running service was compiled prior to an OpenSSL release
that patches one or more publicly known security holes. Depending on how a particular daemon is
set up, it's quite likely for the version number or other information about the utilized
encryption library to be leaked through a client network connection.
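A quick way to gauge what a remote HTTPS daemon is willing to divulge is a simple banner grab; the
hostname below is only a placeholder.
$ # The Server header frequently names mod_ssl and OpenSSL versions outright
$ printf 'HEAD / HTTP/1.0\r\n\r\n' | \
    openssl s_client -quiet -connect www.example.org:443 2>/dev/null | grep -i '^Server:'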
Other popular application programming interfaces (APIs) for encryption include Mozilla's Network
Security Services (NSS), GnuTLS, Bouncy Castle, Beehive, the Microsoft .NET Framework's
System.Security.Cryptography namespace, etc. Although these are fairly common libraries and each
exposes a wide range of crypto functionality, this is by no means an exhaustive list. As time
unfolds, vulnerabilities are publicized and patched either in a crypto API itself or in a specific
unsafe usage of it within a software package dependent upon one or more open source and/or
commercial crypto APIs.
For instance, an application could be using the latest available version of any given encryption
library and still risk data compromise as a result of improper key management, use of a weak
algorithm/mode/key/etc., or any of the other mishaps mentioned later in this text. As an example,
consider an application encrypting lengthy input blocks via a weak block cipher mode, with a
private key value equivalent to a commonly used default password, all transmitted over a plaintext
SOAP/XML web service (i.e., lacking both HTTPS and WS-Security) while using an algorithm that has
thoroughly researched weaknesses -- think RC2-ECB and a 40-bit key.
Aside from promiscuously sniffing network traffic, which was possible for the example given in the
previous paragraph, there are many other situations that can lead to private key plaintext or
ciphertext exposure. Gaining access to the private key's plaintext is certainly preferred from an
attacker's perspective. It's not quite "game over," but knowledge of ciphertext relating to a
private key such as a Key Encrypting Key (KEK), or the ciphertext of the private key itself, takes
the attacker one step closer to cracking the plaintext private key. The use of default block
cipher keys and private certificates shipped out-of-the-box in software products attempting to
take advantage of PKI is another possibility. Lax filesystem permissions that permit the reading
of files containing private key or certificate material are also a common find.
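A lazy audit along those lines might start with something like the following; the paths are merely
common guesses and should be adjusted for the target system.
$ # World-readable private key material lurking under typical certificate directories
$ find /etc/ssl /etc/pki \( -name '*.key' -o -name '*.pem' \) -perm -004 -ls 2>/dev/null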
Crypto cracking tools have evolved tremendously since the olden days of traditional wordlist-based
cracking of encrypted/salted user login passes with John The Ripper. It wasn't that long ago that
the ciphertext passwords for all users on a UNIX system (including root) were available to everyone
through the world-readable /etc/passwd file. Circa the early-to-mid 1990s, due to widespread
cracking of password files, almost all UNIX flavors quickly migrated to user password protection
based on the /etc/shadow file and PAM (Pluggable Authentication Modules). The seminal Linux
distributions becoming available on CD-ROM media via mail order and as book inserts quickly
followed suit, especially with respect to PAM, which allowed the fine-tuning of a system's
authentication behavior by configuring dynamic shared objects (DSOs) to be loaded for the modular
addition of desired authentication functionality.
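For the unfamiliar, PAM's modular behavior is driven by plain-text stack files under /etc/pam.d/.
The lines below are representative entries for a Linux login service, written from memory rather
than captured from any particular system:
# /etc/pam.d/login -- illustrative entries only
auth       required     pam_unix.so nullok_secure
account    required     pam_unix.so
password   required     pam_unix.so md5 shadow
session    required     pam_unix.so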
Not long after, the crypt(3C) library function in Linux began to support MD5 as an alternative to
DES, which by then was on its last legs for UNIX login authentication and other crypto applications
as well. (Note that in this context, DES refers to single DES, which is not to be confused with
TripleDES or 3DES, a much stronger construction built from the original DES cipher.)
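To see the two hash formats side by side, the OpenSSL command-line tool can emit both; the salts
and password here are throw-away values chosen purely for demonstration.
$ # Traditional DES-based crypt(3): a 2-character salt and a 13-character hash
$ openssl passwd -crypt -salt ab secret
$ # MD5-based crypt(3): output begins with the "$1$" marker followed by the salt
$ openssl passwd -1 -salt abcdefgh secret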
However, the UNIX/Linux login crypto woes weren't over yet. The early 2000s witnessed the
discovery of
several high-impact PAM vulnerabilities in Set-UID binaries allowing the loading of arbitrary DSOs,
i.e., shared library files that are usually compiled into a filename ending in a '.so' extension.
Typically, an executable binary will be dynamically linked with such a file. In the case of PAM,
since the DSOs were loaded at runtime via the dlopen() library function, a normal user could compile
a DSO that performed arbitrary actions while executing in PAM's privileged superuser context.
Around the same time that privilege escalation exploits were being discovered in the PAM modular
authentication system, other growing pains continued to materialize out of the crypt(3C) function
itself. For example, one weakness caused the initial password of an account to be encrypted with DES
despite the move to MD5 as the default encryption algorithm for user passwords. This was
probably due to crypt(3C) requiring a special character sequence '$1