Discussion: Kerberoast for John
Michael Kramer
2015-09-28 09:50:56 UTC
Hello John Community,

I'm an Information Engineering student from Germany, and I worked on a John the Ripper format plugin during my time at SySS GmbH.

I wanted to share my work with the John Community. The work is based on the Kerberoast Python script from Tim Medin and I've ported it from there to C and then into John.

I had to cheat a little, since cracking these Kerberos tickets doesn't work like the usual cracking John does. The first 16 bytes of the encrypted data are used as a checksum, while the rest of the ticket is run through a few keyed hashing steps with the cleartext password we are trying and then compared against that checksum. If it matches, our guessed password is the one the ticket was encrypted with. Therefore I saved the first 16 bytes inside the salt variable John uses.
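(For reference, the check described above roughly corresponds to the following sketch of the RC4-HMAC scheme, using OpenSSL's legacy MD4/HMAC/RC4 primitives. This is only an illustration, not the actual plugin code; try_password, the buffer sizes and the plain-ASCII password handling are my own simplifications.)

#include <string.h>
#include <openssl/md4.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rc4.h>

/* edata = encrypted part of the ticket: the first 16 bytes are the
   HMAC-MD5 checksum, the rest is the RC4-encrypted plaintext. */
static int try_password(const char *password,
                        const unsigned char *edata, size_t edata_len)
{
    unsigned char utf16[256], K[16], K1[16], K3[16], chk[16];
    unsigned char plain[8192];
    unsigned char msg_type[4] = { 2, 0, 0, 0 }; /* key usage 2 = TGS ticket */
    unsigned int len;
    size_t i, pw_len = strlen(password);
    RC4_KEY rc4;

    if (edata_len < 16 || edata_len - 16 > sizeof(plain))
        return 0;
    if (pw_len > sizeof(utf16) / 2)
        pw_len = sizeof(utf16) / 2;

    /* K = MD4(UTF-16LE(password)); naive ASCII-only conversion */
    for (i = 0; i < pw_len; i++) {
        utf16[2 * i] = password[i];
        utf16[2 * i + 1] = 0;
    }
    MD4(utf16, 2 * pw_len, K);

    /* K1 = HMAC-MD5(K, message type) */
    HMAC(EVP_md5(), K, 16, msg_type, 4, K1, &len);

    /* K3 = HMAC-MD5(K1, checksum), where checksum = first 16 bytes */
    HMAC(EVP_md5(), K1, 16, edata, 16, K3, &len);

    /* decrypt the rest of the ticket with RC4 under K3 */
    RC4_set_key(&rc4, 16, K3);
    RC4(&rc4, edata_len - 16, edata + 16, plain);

    /* the guess is correct iff HMAC-MD5(K1, plaintext) equals the checksum */
    HMAC(EVP_md5(), K1, 16, plain, edata_len - 16, chk, &len);
    return memcmp(chk, edata, 16) == 0;
}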

I've included the fmt_plug file for John, a test file with 3 test hashes the module is able to crack, and also part of the Python script from Tim Medin that parses kirbi files into the format my John module uses.

I've created the tickets I've tested my module with inside a VM and everything works with them.

Since I'm fairly new to John development, I haven't optimized my code yet.

But I've encountered a strange bug and thought maybe one of you could help me.

If I try to crack my test tickets with the ./john <crackfile> call without any arguments (except the file, of course), the performance is okay. It's not great, but it stays the same over a long period of time.

But if I try to crack a larger file (let's say 300 tickets/hashes), John behaves strangely. The performance starts out very well (150000 p/s) but then gets worse pretty fast. After 17 hours I was down to 500 p/s.

Could anyone help me with this behaviour? Or at least hint at what could cause it? I've already checked for memory leaks, but neither Valgrind nor a check of the memory usage has shown anything.

Greetings,
Michael Kramer
magnum
2015-09-28 20:59:23 UTC
Permalink
Post by Michael Kramer
I wanted to share my work with the John Community. The work is based
on the Kerberoast Python script from Tim Medin and I've ported it
from there to C and then into John.
Cool, thanks!
Post by Michael Kramer
I've included the fmt_plug file for John, a testfile with 3
testhashes the module is able to crack, and also part of the python
script from Tim Medin to parse kirbi files into the format my John
module uses.
You should include all three as test vectors. After doing so, you'll
find that the format fails self-tests as written. It may crack that test
file but it's flawed and will not always work.
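(In John terms that means putting all three into the format's self-test array, roughly like the hypothetical snippet below; both the ciphertext strings and the extra passwords are placeholders, not the real test vectors.)

static struct fmt_tests tests[] = {
	{"<ticket hash #1>", "test123"},
	{"<ticket hash #2>", "password1"},
	{"<ticket hash #3>", "hackme"},
	{NULL}
};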
Post by Michael Kramer
But I've encountered a strange bug and thought maybe one of you could help me.
There are many bugs ;-) I think you need to do the following, for starters:

1. Change BINARY_SIZE to 0 and replace binary with fmt_default_binary.
Have a look at some other format with a binary size of 0.
2. Change salt to a struct holding both the salt and what you are now
putting in the binary (so this becomes a "salt-only" format, or a
non-hash as we usually call them); see the sketch after this list.
Then of course change SALT_SIZE to sizeof that struct.
3. Adjust everything accordingly. Drop the binary_hash/get_hash
functions (use fmt_default_* in the format struct).
4. Replace <openssl/rc4.h> with "rc4.h" (a local file in the tree)
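For item 2, the salt struct could look roughly like this (field names and the size limit are hypothetical, not taken from the actual patch):

#define MAX_TICKET_SIZE 8192

typedef struct {
	unsigned char checksum[16];           /* first 16 bytes of the ticket data */
	unsigned char edata[MAX_TICKET_SIZE]; /* RC4-encrypted remainder */
	int edata_len;
} kerberoast_salt;

/* with SALT_SIZE defined as sizeof(kerberoast_salt) and BINARY_SIZE as 0 */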

BTW, I don't quite get what you are doing with saved_key in init()?

Also, you should rename src/kirbi_export.py to run/kirbi2john.py per our
conventions.

Finally, please base your contributions upon the latest tree in the
bleeding-jumbo branch of https://github.com/magnumripper/JohnTheRipper.
You are using an older version of the formats interface (last release I
presume). If you just fix the rest, I can take care of this.

Solar, the "Apache License" is fine, yes?

Thanks,
magnum
magnum
2015-09-28 21:03:18 UTC
Post by magnum
Post by Michael Kramer
I wanted to share my work with the John Community. The work is based
on the Kerberoast Python script from Tim Medin and I've ported it
from there to C and then into John.
Cool, thanks!
Post by Michael Kramer
I've included the fmt_plug file for John, a testfile with 3
testhashes the module is able to crack, and also part of the python
script from Tim Medin to parse kirbi files into the format my John
module uses.
You should include all three as test vectors. After doing so, you'll
find that the format fails self-tests as written. It may crack that test
file but it's flawed and will not always work.
Post by Michael Kramer
But I've encountered a strange bug and thought maybe one of you could help me.
1. Change BINARY_SIZE to 0 and replace binary with fmt_default_binary.
Have a look at some other format with a binary size of 0.
2. Change salt to a struct holding both the salt and what you are now
putting in the binary (so this becomes a "salt-only" format, or a
non-hash as we usually call them). Then of course change SALT_SIZE to
sizeof that struct.
On another look, perhaps you could actually just switch salt and binary.
That 16 byte thing you currently use as a salt seems to be fine to use
as a binary. Then you'd just put most of cmp_all() in crypt_all() like a
normal format.
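Roughly, such a "normal" format skeleton does the per-candidate work in crypt_all() and leaves the 16-byte comparison to cmp_all()/cmp_one(). This is only a sketch assuming the usual saved_key/cur_salt plumbing of a format plugin; compute_checksum() is a hypothetical helper doing the RC4-HMAC math:

static unsigned char (*crypt_out)[16]; /* one 16-byte result per candidate, allocated in init() */

static int crypt_all(int *pcount, struct db_salt *salt)
{
	int index;

	for (index = 0; index < *pcount; index++)
		compute_checksum(saved_key[index], cur_salt, crypt_out[index]);

	return *pcount;
}

static int cmp_all(void *binary, int count)
{
	int index;

	for (index = 0; index < count; index++)
		if (!memcmp(binary, crypt_out[index], 16))
			return 1;
	return 0;
}

static int cmp_one(void *binary, int index)
{
	return !memcmp(binary, crypt_out[index], 16);
}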

magnum
Michael Kramer
2015-09-28 21:14:39 UTC
Post by magnum
Post by magnum
Post by Michael Kramer
I wanted to share my work with the John Community. The work is based
on the Kerberoast Python script from Tim Medin and I've ported it
from there to C and then into John.
Cool, thanks!
Post by Michael Kramer
I've included the fmt_plug file for John, a testfile with 3
testhashes the module is able to crack, and also part of the python
script from Tim Medin to parse kirbi files into the format my John
module uses.
You should include all three as test vectors. After doing so, you'll
find that the format fails self-tests as written. It may crack that test
file but it's flawed and will not always work.
Post by Michael Kramer
But I've encountered a strange bug and thought maybe one of you could help me.
1. Change BINARY_SIZE to 0 and replace binary with fmt_default_binary.
Have a look at some other format with a binary size of 0.
2. Change salt to a struct holding both the salt and what you are now
putting in the binary (so this becomes a "salt-only" format, or a
non-hash as we usually call them). Then of course change SALT_SIZE to
sizeof that struct.
On another look, perhaps you could actually just switch salt and
binary. That 16 byte thing you currently use as a salt seems to be
fine to use as a binary. Then you'd just put most of cmp_all() in
crypt_all() like a normal format.
magnum
As I said this was my first try at a John module :)
Thank you for the suggestions! I'll try them out and get back in touch
once I've updated the files!

- Michael


Solar Designer
2015-09-28 21:28:02 UTC
Hi Michael,

Thank you for the contribution!
Post by magnum
Solar, the "Apache License" is fine, yes?
Yes, it is, but our cut-down BSD license is preferable if possible:

http://openwall.info/wiki/john/licensing

Also, the Python script currently has no license statement - this should
be added.

Alexander
Michael Kramer
2015-09-28 21:29:53 UTC
Post by Solar Designer
Hi Michael,
Thank you for the contribution!
Post by magnum
Solar, the "Apache License" is fine, yes?
http://openwall.info/wiki/john/licensing
Also, the Python script currently has no license statement - this should
be added.
Alexander
I wasn't sure which license I could use since Kerberoast is registered
under the Apache License. So I can just change to the BSD license?

- Michael

Solar Designer
2015-09-28 21:44:32 UTC
Post by Michael Kramer
I wasn't sure which license I could use since Kerberoast is registered
under the Apache License. So I can just change to the BSD license?
If you build upon someone else's work closely enough that their
copyright (as well as your copyright) applies to your derived work, then
you have to list them as a copyright holder, and the license has to be
either their original license or a license that the original one can be
changed to (e.g., our cut-down BSD can be changed to an N-clause BSD,
but not vice-versa).

To answer your question more directly: no, you can't change from Apache
license to our cut-down BSD license, if what you have is a derived work
and the original author's copyright still applies. In that case, you
have to list them (Tim Medin?) as a copyright holder (along with
yourself), and keep their original license intact (mention it like you
did in the .c file).

However, it is unclear to me whether what you have in the .c file is a
derived work. It looks like you're reusing analysis rather than reusing
code (or merely translating it from one language to another), and it
will deviate even further as you proceed to adjust the code as per
magnum's suggestions.

For the script, yours appears to be closer to being a derived work
(direct reuse of pieces of the script, right?) Did the original script
even have a copyright and license on it? If so, add those (and yours).
If not, ask the original author to add those, or re-code so that the
original author's copyright doesn't apply.

Thanks,

Alexander
Michael Kramer
2015-09-30 07:25:07 UTC
Post by magnum
Post by Michael Kramer
I've included the fmt_plug file for John, a testfile with 3
testhashes the module is able to crack, and also part of the python
script from Tim Medin to parse kirbi files into the format my John
module uses.
You should include all three as test vectors. After doing so, you'll
find that the format fails self-tests as written. It may crack that test
file but it's flawed and will not always work.
I've included three test vectors now. It seems to work this time.
Post by magnum
Post by Michael Kramer
But I've encountered a strange bug and thought maybe one of you could
help me.
1. Change BINARY_SIZE to 0 and replace binary with fmt_default_binary.
Have a look at some other format with a binary size of 0.
Done that.
Post by magnum
2. Change salt to a struct holding both the salt and what you are now
putting in the binary (so this becomes a "salt-only" format, or a
non-hash as we usually call them). Then of course change SALT_SIZE to
sizeof that struct.
Done that as well.
Post by magnum
3. Adjust everything accordingly. Drop the binary_hash/get_hash
functions (use fmt_default_* in the format struct).
Done that as well :)
Post by magnum
4. Replace <openssl/rc4.h> with "rc4.h" (a local file in the tree)
If I replace this I get a segmentation fault. With openssl/rc4.h it works. Any idea why that occurs?
Post by magnum
Also, you should rename src/kirbi_export.py to run/kirbi2john.py per our
conventions.
I've renamed and edited the license for the python script as well.

Attached you'll find the salt-only module and the renamed Python script.

But the bug I encountered before is still there. After 17 hours I get 500p/s...

Greetings,
Michael Kramer
magnum
2015-09-30 09:19:01 UTC
Post by Michael Kramer
Post by magnum
4. Replace <openssl/rc4.h> with "rc4.h" (a local file in the tree)
If I replace this I get a segmentation fault. With openssl/rc4.h it works. Any idea why that occurs?
Oh sorry, that's a bug in Jumbo-1. The bug is fixed in the current tree on
GitHub. Leave it as openssl then.
Post by Michael Kramer
Post by magnum
Also, you should rename src/kirbi_export.py to run/kirbi2john.py per our
conventions.
I've renamed and edited the license for the python script as well.
Attached you'll find the salt-only module and the renamed Python script.
But the bug I encountered before is still there. After 17 hours I get 500p/s...
I'll have a look at it.

Thanks,
magnum
Frank Dittrich
2015-09-30 09:32:35 UTC
Post by magnum
Post by Michael Kramer
But the bug I encountered before is still there. After 17 hours I get 500p/s...
I'll have a look at it.
Could that be single mode retrying correct guesses on other hashes?
magnum
2015-09-30 20:39:12 UTC
Post by Michael Kramer
Post by magnum
Post by Michael Kramer
I've included the fmt_plug file for John, a testfile with 3
testhashes the module is able to crack, and also part of the python
script from Tim Medin to parse kirbi files into the format my John
module uses.
You should include all three as test vectors. After doing so, you'll
find that the format fails self-tests as written. It may crack that test
file but it's flawed and will not always work.
I've included three test vectors now. It seems to work this time.
Post by magnum
Post by Michael Kramer
But I've encountered a strange bug and thought maybe one of you could
help me.
1. Change BINARY_SIZE to 0 and replace binary with fmt_default_binary.
Have a look at some other format with a binary size of 0.
Done that.
Post by magnum
2. Change salt to a struct holding both the salt and what you are now
putting in the binary (so this becomes a "salt-only" format, or a
non-hash as we usually call them). Then of course change SALT_SIZE to
sizeof that struct.
Done that as well.
Post by magnum
3. Adjust everything accordingly. Drop the binary_hash/get_hash
functions (use fmt_default_* in the format struct).
Done that as well :)
Post by magnum
4. Replace <openssl/rc4.h> with "rc4.h" (a local file in the tree)
If I replace this I get a segmentation fault. With openssl/rc4.h it works. Any idea why that occurs?
Post by magnum
Also, you should rename src/kirbi_export.py to run/kirbi2john.py per our
conventions.
I've renamed and edited the license for the python script as well.
Attached you'll find the salt-only module and the renamed Python script.
But the bug I encountered before is still there. After 17 hours I get 500p/s...
Thanks! I committed your patch as-is and then made significant changes
and enhancements in a separate commit:
https://github.com/magnumripper/JohnTheRipper/commit/05e5146
https://github.com/magnumripper/JohnTheRipper/commit/00bd1bb

On a core i5 laptop, speed went from 80K to 116K single-thread, and to
368K "many-salts" speed running 4 threads (HT).

You were using OpenSSL EVP, which is slow and not thread-safe. I bet
that bug was because of that, so it was probably squashed in the process.
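(For anyone reading along: the usual way to keep such a loop thread-safe is to keep all cipher state local to each iteration instead of sharing one EVP context between threads. A minimal sketch of that pattern, reusing the hypothetical helpers and salt struct from earlier in this thread, not the actual committed code:)

#ifdef _OPENMP
#include <omp.h>
#endif
#include <openssl/rc4.h>

static int crypt_all(int *pcount, struct db_salt *salt)
{
	int index;

#ifdef _OPENMP
#pragma omp parallel for
#endif
	for (index = 0; index < *pcount; index++) {
		RC4_KEY rc4;                          /* private to this iteration/thread */
		unsigned char K3[16];
		unsigned char plain[MAX_TICKET_SIZE];

		derive_k3(saved_key[index], cur_salt->checksum, K3); /* hypothetical helper */
		RC4_set_key(&rc4, 16, K3);
		RC4(&rc4, cur_salt->edata_len, cur_salt->edata, plain);
		/* ...then HMAC the plaintext and store the result as before */
	}
	return *pcount;
}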

To get a snapshot of bleeding-jumbo with this format, use:
https://github.com/magnumripper/JohnTheRipper/archive/bleeding-jumbo.tar.gz

magnum
Michael Kramer
2015-10-01 11:40:30 UTC
Post by magnum
Thanks! I committed your patch as-is and then made significant changes
https://github.com/magnumripper/JohnTheRipper/commit/05e5146
https://github.com/magnumripper/JohnTheRipper/commit/00bd1bb
On a core i5 laptop, speed went from 80K to 116K single-thread, and to
368K "many-salts" speed running 4 threads (HT).
You were using OpenSSL EVP, which is slow and not thread-safe. I bet
that bug was because of that, so it was probably squashed in the process.
https://github.com/magnumripper/JohnTheRipper/archive/bleeding-jumbo.tar.gz
magnum
Thank you very much! I'll test it tomorrow as soon as I have access to
the files again. Will let you know if the bug is gone. And thanks for
the enhancements!

Michael

Solar Designer
2015-09-30 20:46:23 UTC
Post by Michael Kramer
I've renamed and edited the license for the python script as well.
Thanks.

By "Kerberoast Scrypt", do you actually mean "Kerberoast script"?

Alexander
Michael Kramer
2015-10-01 11:41:17 UTC
Post by Solar Designer
Post by Michael Kramer
I've renamed and edited the license for the python script as well.
Thanks.
By "Kerberoast Scrypt", do you actually mean "Kerberoast script"?
Alexander
Oh yes of course! Sorry for the typo.

Michael

Michael Kramer
2015-09-30 09:35:58 UTC
On Wednesday, 30 September 2015 at 11:32 CEST, Frank Dittrich <***@mailbox.org> wrote:
Post by Frank Dittrich
Could that be single mode retrying correct guesses on other hashes?
I don't think so. I only had one correct guess in the 17 hours, even though I had 2 "fairly" easily crackable passwords inside the list of 300. (I had the hashes from the test structure within the file; test123 gets cracked right away in stage 2/3, but the other two didn't get cracked even after 17 hours, while it takes only 10-20 minutes if they are part of a file with 5 hashes.)

Greetings,
Michael
Michael Kramer
2015-10-02 07:52:48 UTC
Post by magnum
Thanks! I committed your patch as-is and then made significant changes
https://github.com/magnumripper/JohnTheRipper/commit/05e5146
https://github.com/magnumripper/JohnTheRipper/commit/00bd1bb
On a core i5 laptop, speed went from 80K to 116K single-thread, and to
368K "many-salts" speed running 4 threads (HT).
You were using OpenSSL EVP, which is slow and not thread-safe. I bet
that bug was because of that, so it was probably squashed in the process.
https://github.com/magnumripper/JohnTheRipper/archive/bleeding-jumbo.tar.gz
magnum
Thanks again for fixing/enhancing my code! I was able to test it today and it works faster and better than before.

But I still encounter this strange bug. If I just use ./john <myfile>, the speed gets slower over time.

Some numbers:

0g 0:00:00:01 33.54% 1/3 (ETA: 08:32:05) 0g/s 405927p/s 405927c/s 405927C/s

0g 0:00:01:08 69.54% 2/3 (ETA: 08:33:40) 0g/s 43377p/s 535782c/s 535782C/s

0g 0:00:01:39 3/3 0g/s 30441p/s 528406c/s 528406C/s 011087..025246

0g 0:00:05:46 3/3 0g/s 10025p/s 542166c/s 542166C/s

0g 0:00:29:32 3/3 0g/s 3367p/s 543035c/s 543035C/s

0g 0:01:16:24 3/3 0g/s 2320p/s 526810c/s 526810C/s

Is this behaviour normal?
The file I've loaded has 311 hashes.

- Michael
magnum
2015-10-02 08:33:13 UTC
Post by Michael Kramer
Post by magnum
Thanks! I committed your patch as-is and then made significant changes
https://github.com/magnumripper/JohnTheRipper/commit/05e5146
https://github.com/magnumripper/JohnTheRipper/commit/00bd1bb
You were using OpenSSL EVP, which is slow and not thread-safe. I bet
that bug was because of that, so it was probably squashed in the process.
Thanks again for fixing/enhancing my code! I was able to test it today and it works faster and better than before.
But I still encounter this strange bug. If I just use ./john <myfile>, the speed gets slower over time.
0g 0:00:00:01 33.54% 1/3 (ETA: 08:32:05) 0g/s 405927p/s 405927c/s 405927C/s
0g 0:00:01:08 69.54% 2/3 (ETA: 08:33:40) 0g/s 43377p/s 535782c/s 535782C/s
0g 0:01:16:24 3/3 0g/s 2320p/s 526810c/s 526810C/s
Is this behaviour normal?
The file I've loaded has 311 hashes.
If you look at the c/s or C/s figures, it actually gets faster. The
first stage is "single mode", which is expected to have a LOT better p/s
for many salts than any other mode, due to its design. All other modes
will have p/s ~= (c/s / number of unique salts), and you can expect the
c/s figure to match the benchmark speed figure.

If anything, the c/s of stage 2 (wordlist + rules) is curious. It seems
to indicate stage 2 is slightly faster than incremental (stage 3). That
is normally not the case.

magnum
Michael Kramer
2015-10-02 08:46:39 UTC
Post by magnum
Post by Michael Kramer
Post by magnum
Thanks! I committed your patch as-is and then made significant changes
https://github.com/magnumripper/JohnTheRipper/commit/05e5146
https://github.com/magnumripper/JohnTheRipper/commit/00bd1bb
You were using OpenSSL EVP, which is slow and not thread-safe. I bet
that bug was because of that, so it was probably squashed in the process.
Thanks again for fixing/enhancing my code! I was able to test it today and it works faster and better than before.
But I still encounter this strange bug. If I just use ./john <myfile>, the speed gets slower over time.
0g 0:00:00:01 33.54% 1/3 (ETA: 08:32:05) 0g/s 405927p/s 405927c/s 405927C/s
0g 0:00:01:08 69.54% 2/3 (ETA: 08:33:40) 0g/s 43377p/s 535782c/s 535782C/s
0g 0:01:16:24 3/3 0g/s 2320p/s 526810c/s 526810C/s
Is this behaviour normal?
The file I've loaded has 311 hashes.
If you look at the c/s or C/s figures, it actually gets faster. The
first stage is "single mode", which is expected to have a LOT better p/s
for many salts than any other mode, due to its design. All other modes
will have p/s ~= (c/s / number of unique salts), and you can expect the
c/s figure to match the benchmark speed figure.
If anything, the c/s of stage 2 (wordlist + rules) is curious. It seems
to indicate stage 2 is slightly faster than incremental (stage 3). That
is normally not the case.
magnum
But isn't p/s the candidates/s? So the most interesting value, because it's the actual compares/s? Which means if p/s gets slower the cracking itself gets slower? Or am I understanding something wrong here?

- Michael
magnum
2015-10-02 16:10:30 UTC
Post by Michael Kramer
Post by magnum
Post by Michael Kramer
But I still encounter this strange bug. If I just use ./john <myfile>, the speed gets slower over time.
0g 0:00:00:01 33.54% 1/3 (ETA: 08:32:05) 0g/s 405927p/s 405927c/s 405927C/s
0g 0:00:01:08 69.54% 2/3 (ETA: 08:33:40) 0g/s 43377p/s 535782c/s 535782C/s
0g 0:01:16:24 3/3 0g/s 2320p/s 526810c/s 526810C/s
Is this behaviour normal?
The file I've loaded has 311 hashes.
If you look at the c/s or C/s figures, it actually gets faster. The
first stage is "single mode", which is expected to have a LOT better p/s
for many salts than any other mode, due to its design. All other modes
will have p/s ~= (c/s / number of unique salts), and you can expect the
c/s figure to match the benchmark speed figure.
If anything, the c/s of stage 2 (wordlist + rules) is curious. It seems
to indicate stage 2 is slightly faster than incremental (stage 3). That
is normally not the case.
But isn't p/s the candidates/s? So the most interesting value, because it's the actual compares/s? Which means if p/s gets slower the cracking itself gets slower? Or am I understanding something wrong here?
The C/s is "compares" per second, if you will. The p/s is the number of input
words tried per second. The latter is affected by the number of salts.

Look at it this way: The "normal" speed here is something like 530K c/s
divided by the number of salts, which is about 1700 p/s.

Stage 1 (single mode) is "way faster than normal" for many salts so you
don't see much of that "divided by" thing. The other ones are normal. If
anything is weird with your numbers (compared to other crackers), it's
the crazy fast relative speed for stage 1.

The figures you see at each status print are the total averages for the
whole run, so after stage 1 is done the speed shown will decrease slowly
over time, approaching 1700 p/s (or whatever is the normal speed).

magnum
